00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1908 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3169 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.044 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.045 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.079 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.115 Using shallow fetch with depth 1 00:00:00.115 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.115 > git --version # timeout=10 00:00:00.166 > git --version # 'git version 2.39.2' 00:00:00.166 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.206 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.206 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.336 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.346 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.357 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.357 > git config core.sparsecheckout # timeout=10 00:00:05.367 > git read-tree -mu HEAD # timeout=10 00:00:05.381 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.400 Commit message: "pool: fixes for VisualBuild class" 00:00:05.400 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:05.510 [Pipeline] Start of Pipeline 00:00:05.525 [Pipeline] library 00:00:05.527 Loading library shm_lib@master 00:00:05.527 Library shm_lib@master is cached. Copying from home. 00:00:05.541 [Pipeline] node 00:00:05.553 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:05.555 [Pipeline] { 00:00:05.568 [Pipeline] catchError 00:00:05.570 [Pipeline] { 00:00:05.586 [Pipeline] wrap 00:00:05.597 [Pipeline] { 00:00:05.606 [Pipeline] stage 00:00:05.608 [Pipeline] { (Prologue) 00:00:05.630 [Pipeline] echo 00:00:05.632 Node: VM-host-SM16 00:00:05.638 [Pipeline] cleanWs 00:00:05.647 [WS-CLEANUP] Deleting project workspace... 00:00:05.647 [WS-CLEANUP] Deferred wipeout is used... 
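Note on the checkout sequence above: the Jenkins git plugin pins the build to a single commit by doing a depth-1 fetch of one branch and then force-checking-out the revision resolved from FETCH_HEAD, so no history is downloaded. A minimal sketch of the same sequence as plain shell, with the URL and revision handling taken from the log and the target directory name chosen only for illustration:

    # shallow, single-branch fetch into a fresh repository
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    # resolve the fetched tip to a commit id and check it out detached
    rev=$(git rev-parse FETCH_HEAD^{commit})
    git checkout -f "$rev"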
00:00:05.653 [WS-CLEANUP] done 00:00:05.806 [Pipeline] setCustomBuildProperty 00:00:05.873 [Pipeline] nodesByLabel 00:00:05.874 Found a total of 2 nodes with the 'sorcerer' label 00:00:05.883 [Pipeline] httpRequest 00:00:05.886 HttpMethod: GET 00:00:05.887 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.887 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.896 Response Code: HTTP/1.1 200 OK 00:00:05.896 Success: Status code 200 is in the accepted range: 200,404 00:00:05.897 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.691 [Pipeline] sh 00:00:07.969 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.981 [Pipeline] httpRequest 00:00:07.984 HttpMethod: GET 00:00:07.984 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:07.985 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:08.000 Response Code: HTTP/1.1 200 OK 00:00:08.000 Success: Status code 200 is in the accepted range: 200,404 00:00:08.001 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:22.265 [Pipeline] sh 00:01:22.545 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:25.090 [Pipeline] sh 00:01:25.372 + git -C spdk log --oneline -n5 00:01:25.372 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:25.372 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:25.372 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:25.372 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:25.372 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:25.391 [Pipeline] writeFile 00:01:25.407 [Pipeline] sh 00:01:25.689 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:25.702 [Pipeline] sh 00:01:25.983 + cat autorun-spdk.conf 00:01:25.983 SPDK_TEST_UNITTEST=1 00:01:25.983 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.983 SPDK_TEST_NVME=1 00:01:25.983 SPDK_TEST_BLOCKDEV=1 00:01:25.983 SPDK_RUN_ASAN=1 00:01:25.983 SPDK_RUN_UBSAN=1 00:01:25.983 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.990 RUN_NIGHTLY=1 00:01:25.992 [Pipeline] } 00:01:26.008 [Pipeline] // stage 00:01:26.023 [Pipeline] stage 00:01:26.025 [Pipeline] { (Run VM) 00:01:26.039 [Pipeline] sh 00:01:26.323 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:26.323 + echo 'Start stage prepare_nvme.sh' 00:01:26.323 Start stage prepare_nvme.sh 00:01:26.323 + [[ -n 2 ]] 00:01:26.323 + disk_prefix=ex2 00:01:26.323 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:26.323 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:26.323 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:26.323 ++ SPDK_TEST_UNITTEST=1 00:01:26.323 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.323 ++ SPDK_TEST_NVME=1 00:01:26.323 ++ SPDK_TEST_BLOCKDEV=1 00:01:26.323 ++ SPDK_RUN_ASAN=1 00:01:26.323 ++ SPDK_RUN_UBSAN=1 00:01:26.323 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.323 ++ RUN_NIGHTLY=1 00:01:26.323 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:26.323 + nvme_files=() 00:01:26.323 + declare -A nvme_files 00:01:26.323 + 
backend_dir=/var/lib/libvirt/images/backends 00:01:26.323 + nvme_files['nvme.img']=5G 00:01:26.323 + nvme_files['nvme-cmb.img']=5G 00:01:26.323 + nvme_files['nvme-multi0.img']=4G 00:01:26.323 + nvme_files['nvme-multi1.img']=4G 00:01:26.323 + nvme_files['nvme-multi2.img']=4G 00:01:26.323 + nvme_files['nvme-openstack.img']=8G 00:01:26.323 + nvme_files['nvme-zns.img']=5G 00:01:26.323 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:26.323 + (( SPDK_TEST_FTL == 1 )) 00:01:26.323 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:26.323 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:26.323 + for nvme in "${!nvme_files[@]}" 00:01:26.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:26.323 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.323 + for nvme in "${!nvme_files[@]}" 00:01:26.323 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.892 + for nvme in "${!nvme_files[@]}" 00:01:26.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:26.892 + for nvme in "${!nvme_files[@]}" 00:01:26.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:26.892 + for nvme in "${!nvme_files[@]}" 00:01:26.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.892 + for nvme in "${!nvme_files[@]}" 00:01:26.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:26.892 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:26.892 + for nvme in "${!nvme_files[@]}" 00:01:26.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:27.829 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:27.829 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:27.829 + echo 'End stage prepare_nvme.sh' 00:01:27.829 End stage prepare_nvme.sh 00:01:27.841 [Pipeline] sh 00:01:28.123 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:28.123 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f ubuntu2204 00:01:28.123 00:01:28.123 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:28.123 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:28.123 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:28.123 HELP=0 00:01:28.123 DRY_RUN=0 00:01:28.123 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img, 00:01:28.123 NVME_DISKS_TYPE=nvme, 00:01:28.123 
NVME_AUTO_CREATE=0 00:01:28.123 NVME_DISKS_NAMESPACES=, 00:01:28.123 NVME_CMB=, 00:01:28.123 NVME_PMR=, 00:01:28.123 NVME_ZNS=, 00:01:28.123 NVME_MS=, 00:01:28.123 NVME_FDP=, 00:01:28.123 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:28.123 SPDK_VAGRANT_VMCPU=10 00:01:28.123 SPDK_VAGRANT_VMRAM=12288 00:01:28.123 SPDK_VAGRANT_PROVIDER=libvirt 00:01:28.123 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:28.123 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:28.123 SPDK_OPENSTACK_NETWORK=0 00:01:28.123 VAGRANT_PACKAGE_BOX=0 00:01:28.123 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:28.123 FORCE_DISTRO=true 00:01:28.123 VAGRANT_BOX_VERSION= 00:01:28.123 EXTRA_VAGRANTFILES= 00:01:28.123 NIC_MODEL=e1000 00:01:28.123 00:01:28.123 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:28.123 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:30.679 Bringing machine 'default' up with 'libvirt' provider... 00:01:31.247 ==> default: Creating image (snapshot of base box volume). 00:01:31.506 ==> default: Creating domain with the following settings... 00:01:31.506 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1717965839_459bde521c6fe4e3797e 00:01:31.506 ==> default: -- Domain type: kvm 00:01:31.506 ==> default: -- Cpus: 10 00:01:31.506 ==> default: -- Feature: acpi 00:01:31.506 ==> default: -- Feature: apic 00:01:31.506 ==> default: -- Feature: pae 00:01:31.506 ==> default: -- Memory: 12288M 00:01:31.506 ==> default: -- Memory Backing: hugepages: 00:01:31.506 ==> default: -- Management MAC: 00:01:31.506 ==> default: -- Loader: 00:01:31.506 ==> default: -- Nvram: 00:01:31.506 ==> default: -- Base box: spdk/ubuntu2204 00:01:31.506 ==> default: -- Storage pool: default 00:01:31.506 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1717965839_459bde521c6fe4e3797e.img (20G) 00:01:31.506 ==> default: -- Volume Cache: default 00:01:31.506 ==> default: -- Kernel: 00:01:31.506 ==> default: -- Initrd: 00:01:31.506 ==> default: -- Graphics Type: vnc 00:01:31.506 ==> default: -- Graphics Port: -1 00:01:31.506 ==> default: -- Graphics IP: 127.0.0.1 00:01:31.506 ==> default: -- Graphics Password: Not defined 00:01:31.506 ==> default: -- Video Type: cirrus 00:01:31.506 ==> default: -- Video VRAM: 9216 00:01:31.506 ==> default: -- Sound Type: 00:01:31.506 ==> default: -- Keymap: en-us 00:01:31.506 ==> default: -- TPM Path: 00:01:31.506 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:31.506 ==> default: -- Command line args: 00:01:31.506 ==> default: -> value=-device, 00:01:31.506 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:31.506 ==> default: -> value=-drive, 00:01:31.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:31.506 ==> default: -> value=-device, 00:01:31.506 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:31.506 ==> default: Creating shared folders metadata... 00:01:31.506 ==> default: Starting domain. 00:01:34.032 ==> default: Waiting for domain to get an IP address... 00:01:44.006 ==> default: Waiting for SSH to become available... 00:01:44.572 ==> default: Configuring and enabling network interfaces... 
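The "Command line args" block above is the interesting part of the domain definition: the raw backing file is attached as an unassigned block node (-drive ... if=none,id=nvme-0-drive0), an emulated NVMe controller is added (-device nvme,id=nvme-0,serial=12340), and a separate namespace device binds the drive to that controller. A sketch of the same wiring as a direct QEMU invocation, reusing the CPU count, memory size, and device properties from the log; the qemu-img step is an assumption standing in for SPDK's create_nvme_img.sh, and boot, display, and network devices are omitted:

    # create the 5G raw backing file (stand-in for create_nvme_img.sh)
    qemu-img create -f raw /var/lib/libvirt/images/backends/ex2-nvme.img 5G

    qemu-system-x86_64 -machine q35,accel=kvm -smp 10 -m 12288 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest, this namespace is what later shows up as nvme0n1 in the setup.sh status table.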
00:01:48.761 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:54.031 ==> default: Mounting SSHFS shared folder... 00:01:54.968 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:54.968 ==> default: Checking Mount.. 00:01:55.904 ==> default: Folder Successfully Mounted! 00:01:55.904 ==> default: Running provisioner: file... 00:01:56.175 default: ~/.gitconfig => .gitconfig 00:01:56.446 00:01:56.446 SUCCESS! 00:01:56.446 00:01:56.446 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:56.446 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.446 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:56.446 00:01:56.455 [Pipeline] } 00:01:56.472 [Pipeline] // stage 00:01:56.482 [Pipeline] dir 00:01:56.482 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:01:56.484 [Pipeline] { 00:01:56.498 [Pipeline] catchError 00:01:56.500 [Pipeline] { 00:01:56.514 [Pipeline] sh 00:01:56.794 + vagrant ssh-config --host vagrant 00:01:56.794 + sed -ne /^Host/,$p 00:01:56.794 + tee ssh_conf 00:02:00.981 Host vagrant 00:02:00.981 HostName 192.168.121.145 00:02:00.981 User vagrant 00:02:00.981 Port 22 00:02:00.981 UserKnownHostsFile /dev/null 00:02:00.981 StrictHostKeyChecking no 00:02:00.981 PasswordAuthentication no 00:02:00.981 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:00.981 IdentitiesOnly yes 00:02:00.981 LogLevel FATAL 00:02:00.981 ForwardAgent yes 00:02:00.981 ForwardX11 yes 00:02:00.981 00:02:00.994 [Pipeline] withEnv 00:02:00.997 [Pipeline] { 00:02:01.010 [Pipeline] sh 00:02:01.286 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:01.286 source /etc/os-release 00:02:01.286 [[ -e /image.version ]] && img=$(< /image.version) 00:02:01.286 # Minimal, systemd-like check. 00:02:01.286 if [[ -e /.dockerenv ]]; then 00:02:01.286 # Clear garbage from the node's name: 00:02:01.286 # agt-er_autotest_547-896 -> autotest_547-896 00:02:01.286 # $HOSTNAME is the actual container id 00:02:01.286 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:01.286 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:01.286 # We can assume this is a mount from a host where container is running, 00:02:01.286 # so fetch its hostname to easily identify the target swarm worker. 
00:02:01.286 container="$(< /etc/hostname) ($agent)" 00:02:01.286 else 00:02:01.286 # Fallback 00:02:01.286 container=$agent 00:02:01.286 fi 00:02:01.286 fi 00:02:01.286 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.286 00:02:01.556 [Pipeline] } 00:02:01.576 [Pipeline] // withEnv 00:02:01.585 [Pipeline] setCustomBuildProperty 00:02:01.601 [Pipeline] stage 00:02:01.603 [Pipeline] { (Tests) 00:02:01.622 [Pipeline] sh 00:02:01.901 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:02.173 [Pipeline] sh 00:02:02.452 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.725 [Pipeline] timeout 00:02:02.725 Timeout set to expire in 1 hr 30 min 00:02:02.727 [Pipeline] { 00:02:02.742 [Pipeline] sh 00:02:03.022 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.589 HEAD is now at 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:02:03.603 [Pipeline] sh 00:02:03.897 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:04.174 [Pipeline] sh 00:02:04.447 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:04.720 [Pipeline] sh 00:02:04.998 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:05.257 ++ readlink -f spdk_repo 00:02:05.257 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:05.257 + [[ -n /home/vagrant/spdk_repo ]] 00:02:05.257 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:05.257 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:05.257 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:05.257 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:05.257 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:05.257 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:05.257 + cd /home/vagrant/spdk_repo 00:02:05.257 + source /etc/os-release 00:02:05.257 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:05.257 ++ NAME=Ubuntu 00:02:05.257 ++ VERSION_ID=22.04 00:02:05.257 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:05.257 ++ VERSION_CODENAME=jammy 00:02:05.257 ++ ID=ubuntu 00:02:05.257 ++ ID_LIKE=debian 00:02:05.257 ++ HOME_URL=https://www.ubuntu.com/ 00:02:05.257 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:05.257 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:05.257 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:05.257 ++ UBUNTU_CODENAME=jammy 00:02:05.257 + uname -a 00:02:05.257 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:05.257 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:05.257 Hugepages 00:02:05.257 node hugesize free / total 00:02:05.257 node0 1048576kB 0 / 0 00:02:05.257 node0 2048kB 0 / 0 00:02:05.257 00:02:05.257 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.257 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.515 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.515 + rm -f /tmp/spdk-ld-path 00:02:05.515 + source autorun-spdk.conf 00:02:05.515 ++ SPDK_TEST_UNITTEST=1 00:02:05.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.515 ++ SPDK_TEST_NVME=1 00:02:05.515 ++ SPDK_TEST_BLOCKDEV=1 00:02:05.515 ++ SPDK_RUN_ASAN=1 00:02:05.515 ++ SPDK_RUN_UBSAN=1 00:02:05.515 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.515 ++ RUN_NIGHTLY=1 00:02:05.515 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.515 + [[ -n '' ]] 00:02:05.515 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.515 + for M in /var/spdk/build-*-manifest.txt 00:02:05.515 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.515 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.515 + for M in /var/spdk/build-*-manifest.txt 00:02:05.515 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.515 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.515 ++ uname 00:02:05.515 + [[ Linux == \L\i\n\u\x ]] 00:02:05.515 + sudo dmesg -T 00:02:05.515 + sudo dmesg --clear 00:02:05.515 + dmesg_pid=2082 00:02:05.515 + [[ Ubuntu == FreeBSD ]] 00:02:05.515 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.515 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.515 + sudo dmesg -Tw 00:02:05.515 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.515 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.515 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.515 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:05.515 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.515 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:05.515 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:05.515 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:05.515 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.515 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.515 Test configuration: 00:02:05.515 SPDK_TEST_UNITTEST=1 00:02:05.515 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.515 SPDK_TEST_NVME=1 00:02:05.515 SPDK_TEST_BLOCKDEV=1 00:02:05.515 SPDK_RUN_ASAN=1 00:02:05.515 SPDK_RUN_UBSAN=1 00:02:05.515 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.515 RUN_NIGHTLY=1 20:44:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.515 20:44:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.515 20:44:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.515 20:44:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.515 20:44:32 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:05.515 20:44:32 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:05.515 20:44:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:05.515 20:44:32 -- paths/export.sh@5 -- $ export PATH 00:02:05.515 20:44:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:05.515 20:44:32 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.515 20:44:32 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:05.515 20:44:32 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717965872.XXXXXX 00:02:05.515 20:44:32 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717965872.Z0reTH 00:02:05.515 20:44:32 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:05.515 20:44:32 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:02:05.515 20:44:32 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:05.515 20:44:32 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.515 20:44:32 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.515 20:44:32 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:05.515 20:44:32 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:05.515 20:44:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.516 20:44:32 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage' 00:02:05.516 20:44:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.516 20:44:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.516 20:44:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.516 20:44:32 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.516 Sun Jun 9 20:44:32 UTC 2024 00:02:05.516 20:44:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.516 LTS-43-g130b9406a 00:02:05.516 20:44:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:05.516 20:44:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:05.516 20:44:33 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:05.516 20:44:33 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:05.516 20:44:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.516 ************************************ 00:02:05.516 START TEST asan 00:02:05.516 ************************************ 00:02:05.516 using asan 00:02:05.516 20:44:33 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:05.516 00:02:05.516 real 0m0.000s 00:02:05.516 user 0m0.000s 00:02:05.516 sys 0m0.000s 00:02:05.516 20:44:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.516 ************************************ 00:02:05.516 END TEST asan 00:02:05.516 ************************************ 00:02:05.516 20:44:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.516 20:44:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.516 20:44:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.516 20:44:33 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:05.516 20:44:33 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:05.516 20:44:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.773 ************************************ 00:02:05.773 START TEST ubsan 00:02:05.774 ************************************ 00:02:05.774 using ubsan 00:02:05.774 20:44:33 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:05.774 00:02:05.774 real 0m0.000s 00:02:05.774 user 0m0.000s 00:02:05.774 sys 0m0.000s 00:02:05.774 20:44:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.774 ************************************ 00:02:05.774 END TEST ubsan 00:02:05.774 ************************************ 00:02:05.774 20:44:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.774 20:44:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:05.774 20:44:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:05.774 20:44:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:05.774 20:44:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:05.774 20:44:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:05.774 20:44:33 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:05.774 20:44:33 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:05.774 20:44:33 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:02:05.774 
20:44:33 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:05.774 20:44:33 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:05.774 20:44:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.774 ************************************ 00:02:05.774 START TEST unittest_build 00:02:05.774 ************************************ 00:02:05.774 20:44:33 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:02:05.774 20:44:33 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --without-shared 00:02:05.774 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:05.774 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.032 Using 'verbs' RDMA provider 00:02:18.799 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:33.709 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:33.709 Creating mk/config.mk...done. 00:02:33.709 Creating mk/cc.flags.mk...done. 00:02:33.709 Type 'make' to build. 00:02:33.709 20:44:59 -- common/autobuild_common.sh@403 -- $ make -j10 00:02:33.709 make[1]: Nothing to be done for 'all'. 00:02:48.582 The Meson build system 00:02:48.582 Version: 1.4.0 00:02:48.582 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:48.582 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:48.582 Build type: native build 00:02:48.582 Program cat found: YES (/usr/bin/cat) 00:02:48.582 Project name: DPDK 00:02:48.582 Project version: 23.11.0 00:02:48.582 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:48.582 C linker for the host machine: cc ld.bfd 2.38 00:02:48.582 Host machine cpu family: x86_64 00:02:48.582 Host machine cpu: x86_64 00:02:48.582 Message: ## Building in Developer Mode ## 00:02:48.582 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.582 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.582 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.582 Program python3 found: YES (/usr/bin/python3) 00:02:48.582 Program cat found: YES (/usr/bin/cat) 00:02:48.582 Compiler for C supports arguments -march=native: YES 00:02:48.582 Checking for size of "void *" : 8 00:02:48.582 Checking for size of "void *" : 8 (cached) 00:02:48.582 Library m found: YES 00:02:48.582 Library numa found: YES 00:02:48.582 Has header "numaif.h" : YES 00:02:48.582 Library fdt found: NO 00:02:48.582 Library execinfo found: NO 00:02:48.582 Has header "execinfo.h" : YES 00:02:48.582 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:48.582 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.582 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.582 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.582 Run-time dependency openssl found: YES 3.0.2 00:02:48.582 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:48.582 Library pcap found: NO 00:02:48.582 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.582 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.582 Compiler for C supports arguments -Wformat: YES 00:02:48.582 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:48.582 Compiler for C supports 
arguments -Wformat-security: YES 00:02:48.582 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.582 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.582 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.582 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.582 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.582 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.582 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.582 Compiler for C supports arguments -Wundef: YES 00:02:48.582 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.583 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.583 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.583 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.583 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.583 Program objdump found: YES (/usr/bin/objdump) 00:02:48.583 Compiler for C supports arguments -mavx512f: YES 00:02:48.583 Checking if "AVX512 checking" compiles: YES 00:02:48.583 Fetching value of define "__SSE4_2__" : 1 00:02:48.583 Fetching value of define "__AES__" : 1 00:02:48.583 Fetching value of define "__AVX__" : 1 00:02:48.583 Fetching value of define "__AVX2__" : 1 00:02:48.583 Fetching value of define "__AVX512BW__" : (undefined) 00:02:48.583 Fetching value of define "__AVX512CD__" : (undefined) 00:02:48.583 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:48.583 Fetching value of define "__AVX512F__" : (undefined) 00:02:48.583 Fetching value of define "__AVX512VL__" : (undefined) 00:02:48.583 Fetching value of define "__PCLMUL__" : 1 00:02:48.583 Fetching value of define "__RDRND__" : 1 00:02:48.583 Fetching value of define "__RDSEED__" : 1 00:02:48.583 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:48.583 Fetching value of define "__znver1__" : (undefined) 00:02:48.583 Fetching value of define "__znver2__" : (undefined) 00:02:48.583 Fetching value of define "__znver3__" : (undefined) 00:02:48.583 Fetching value of define "__znver4__" : (undefined) 00:02:48.583 Library asan found: YES 00:02:48.583 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.583 Message: lib/log: Defining dependency "log" 00:02:48.583 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.583 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.583 Library rt found: YES 00:02:48.583 Checking for function "getentropy" : NO 00:02:48.583 Message: lib/eal: Defining dependency "eal" 00:02:48.583 Message: lib/ring: Defining dependency "ring" 00:02:48.583 Message: lib/rcu: Defining dependency "rcu" 00:02:48.583 Message: lib/mempool: Defining dependency "mempool" 00:02:48.583 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.583 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.583 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:48.583 Compiler for C supports arguments -mpclmul: YES 00:02:48.583 Compiler for C supports arguments -maes: YES 00:02:48.583 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.583 Compiler for C supports arguments -mavx512bw: YES 00:02:48.583 Compiler for C supports arguments -mavx512dq: YES 00:02:48.583 Compiler for C supports arguments -mavx512vl: YES 00:02:48.583 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.583 Compiler for C supports arguments -mavx2: YES 00:02:48.583 Compiler for C 
supports arguments -mavx: YES 00:02:48.583 Message: lib/net: Defining dependency "net" 00:02:48.583 Message: lib/meter: Defining dependency "meter" 00:02:48.583 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.583 Message: lib/pci: Defining dependency "pci" 00:02:48.583 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.583 Message: lib/hash: Defining dependency "hash" 00:02:48.583 Message: lib/timer: Defining dependency "timer" 00:02:48.583 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.583 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.583 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.583 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.583 Message: lib/power: Defining dependency "power" 00:02:48.583 Message: lib/reorder: Defining dependency "reorder" 00:02:48.583 Message: lib/security: Defining dependency "security" 00:02:48.583 Has header "linux/userfaultfd.h" : YES 00:02:48.583 Has header "linux/vduse.h" : YES 00:02:48.583 Message: lib/vhost: Defining dependency "vhost" 00:02:48.583 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.583 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.583 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.583 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.583 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.583 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.583 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.583 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.583 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.583 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.583 Program doxygen found: YES (/usr/bin/doxygen) 00:02:48.583 Configuring doxy-api-html.conf using configuration 00:02:48.583 Configuring doxy-api-man.conf using configuration 00:02:48.583 Program mandb found: YES (/usr/bin/mandb) 00:02:48.583 Program sphinx-build found: NO 00:02:48.583 Configuring rte_build_config.h using configuration 00:02:48.583 Message: 00:02:48.583 ================= 00:02:48.583 Applications Enabled 00:02:48.583 ================= 00:02:48.583 00:02:48.583 apps: 00:02:48.583 00:02:48.583 00:02:48.583 Message: 00:02:48.583 ================= 00:02:48.583 Libraries Enabled 00:02:48.583 ================= 00:02:48.583 00:02:48.583 libs: 00:02:48.583 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.583 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.583 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.583 00:02:48.583 Message: 00:02:48.583 =============== 00:02:48.583 Drivers Enabled 00:02:48.583 =============== 00:02:48.583 00:02:48.583 common: 00:02:48.583 00:02:48.583 bus: 00:02:48.583 pci, vdev, 00:02:48.583 mempool: 00:02:48.583 ring, 00:02:48.583 dma: 00:02:48.583 00:02:48.583 net: 00:02:48.583 00:02:48.583 crypto: 00:02:48.583 00:02:48.583 compress: 00:02:48.583 00:02:48.583 vdpa: 00:02:48.583 00:02:48.583 00:02:48.583 Message: 00:02:48.583 ================= 00:02:48.583 Content Skipped 00:02:48.583 ================= 00:02:48.583 00:02:48.583 apps: 00:02:48.583 dumpcap: explicitly disabled via build config 00:02:48.583 graph: explicitly disabled via build config 00:02:48.583 pdump: explicitly disabled via build config 00:02:48.583 proc-info: explicitly 
disabled via build config 00:02:48.583 test-acl: explicitly disabled via build config 00:02:48.583 test-bbdev: explicitly disabled via build config 00:02:48.583 test-cmdline: explicitly disabled via build config 00:02:48.583 test-compress-perf: explicitly disabled via build config 00:02:48.583 test-crypto-perf: explicitly disabled via build config 00:02:48.583 test-dma-perf: explicitly disabled via build config 00:02:48.583 test-eventdev: explicitly disabled via build config 00:02:48.583 test-fib: explicitly disabled via build config 00:02:48.583 test-flow-perf: explicitly disabled via build config 00:02:48.583 test-gpudev: explicitly disabled via build config 00:02:48.583 test-mldev: explicitly disabled via build config 00:02:48.583 test-pipeline: explicitly disabled via build config 00:02:48.583 test-pmd: explicitly disabled via build config 00:02:48.583 test-regex: explicitly disabled via build config 00:02:48.583 test-sad: explicitly disabled via build config 00:02:48.583 test-security-perf: explicitly disabled via build config 00:02:48.583 00:02:48.583 libs: 00:02:48.583 metrics: explicitly disabled via build config 00:02:48.583 acl: explicitly disabled via build config 00:02:48.583 bbdev: explicitly disabled via build config 00:02:48.583 bitratestats: explicitly disabled via build config 00:02:48.583 bpf: explicitly disabled via build config 00:02:48.583 cfgfile: explicitly disabled via build config 00:02:48.583 distributor: explicitly disabled via build config 00:02:48.583 efd: explicitly disabled via build config 00:02:48.583 eventdev: explicitly disabled via build config 00:02:48.583 dispatcher: explicitly disabled via build config 00:02:48.583 gpudev: explicitly disabled via build config 00:02:48.583 gro: explicitly disabled via build config 00:02:48.583 gso: explicitly disabled via build config 00:02:48.583 ip_frag: explicitly disabled via build config 00:02:48.583 jobstats: explicitly disabled via build config 00:02:48.583 latencystats: explicitly disabled via build config 00:02:48.583 lpm: explicitly disabled via build config 00:02:48.583 member: explicitly disabled via build config 00:02:48.583 pcapng: explicitly disabled via build config 00:02:48.583 rawdev: explicitly disabled via build config 00:02:48.583 regexdev: explicitly disabled via build config 00:02:48.583 mldev: explicitly disabled via build config 00:02:48.583 rib: explicitly disabled via build config 00:02:48.583 sched: explicitly disabled via build config 00:02:48.583 stack: explicitly disabled via build config 00:02:48.583 ipsec: explicitly disabled via build config 00:02:48.583 pdcp: explicitly disabled via build config 00:02:48.583 fib: explicitly disabled via build config 00:02:48.583 port: explicitly disabled via build config 00:02:48.583 pdump: explicitly disabled via build config 00:02:48.583 table: explicitly disabled via build config 00:02:48.583 pipeline: explicitly disabled via build config 00:02:48.583 graph: explicitly disabled via build config 00:02:48.583 node: explicitly disabled via build config 00:02:48.583 00:02:48.583 drivers: 00:02:48.583 common/cpt: not in enabled drivers build config 00:02:48.583 common/dpaax: not in enabled drivers build config 00:02:48.583 common/iavf: not in enabled drivers build config 00:02:48.583 common/idpf: not in enabled drivers build config 00:02:48.583 common/mvep: not in enabled drivers build config 00:02:48.583 common/octeontx: not in enabled drivers build config 00:02:48.583 bus/auxiliary: not in enabled drivers build config 00:02:48.583 bus/cdx: not in 
enabled drivers build config 00:02:48.583 bus/dpaa: not in enabled drivers build config 00:02:48.583 bus/fslmc: not in enabled drivers build config 00:02:48.583 bus/ifpga: not in enabled drivers build config 00:02:48.583 bus/platform: not in enabled drivers build config 00:02:48.583 bus/vmbus: not in enabled drivers build config 00:02:48.583 common/cnxk: not in enabled drivers build config 00:02:48.584 common/mlx5: not in enabled drivers build config 00:02:48.584 common/nfp: not in enabled drivers build config 00:02:48.584 common/qat: not in enabled drivers build config 00:02:48.584 common/sfc_efx: not in enabled drivers build config 00:02:48.584 mempool/bucket: not in enabled drivers build config 00:02:48.584 mempool/cnxk: not in enabled drivers build config 00:02:48.584 mempool/dpaa: not in enabled drivers build config 00:02:48.584 mempool/dpaa2: not in enabled drivers build config 00:02:48.584 mempool/octeontx: not in enabled drivers build config 00:02:48.584 mempool/stack: not in enabled drivers build config 00:02:48.584 dma/cnxk: not in enabled drivers build config 00:02:48.584 dma/dpaa: not in enabled drivers build config 00:02:48.584 dma/dpaa2: not in enabled drivers build config 00:02:48.584 dma/hisilicon: not in enabled drivers build config 00:02:48.584 dma/idxd: not in enabled drivers build config 00:02:48.584 dma/ioat: not in enabled drivers build config 00:02:48.584 dma/skeleton: not in enabled drivers build config 00:02:48.584 net/af_packet: not in enabled drivers build config 00:02:48.584 net/af_xdp: not in enabled drivers build config 00:02:48.584 net/ark: not in enabled drivers build config 00:02:48.584 net/atlantic: not in enabled drivers build config 00:02:48.584 net/avp: not in enabled drivers build config 00:02:48.584 net/axgbe: not in enabled drivers build config 00:02:48.584 net/bnx2x: not in enabled drivers build config 00:02:48.584 net/bnxt: not in enabled drivers build config 00:02:48.584 net/bonding: not in enabled drivers build config 00:02:48.584 net/cnxk: not in enabled drivers build config 00:02:48.584 net/cpfl: not in enabled drivers build config 00:02:48.584 net/cxgbe: not in enabled drivers build config 00:02:48.584 net/dpaa: not in enabled drivers build config 00:02:48.584 net/dpaa2: not in enabled drivers build config 00:02:48.584 net/e1000: not in enabled drivers build config 00:02:48.584 net/ena: not in enabled drivers build config 00:02:48.584 net/enetc: not in enabled drivers build config 00:02:48.584 net/enetfec: not in enabled drivers build config 00:02:48.584 net/enic: not in enabled drivers build config 00:02:48.584 net/failsafe: not in enabled drivers build config 00:02:48.584 net/fm10k: not in enabled drivers build config 00:02:48.584 net/gve: not in enabled drivers build config 00:02:48.584 net/hinic: not in enabled drivers build config 00:02:48.584 net/hns3: not in enabled drivers build config 00:02:48.584 net/i40e: not in enabled drivers build config 00:02:48.584 net/iavf: not in enabled drivers build config 00:02:48.584 net/ice: not in enabled drivers build config 00:02:48.584 net/idpf: not in enabled drivers build config 00:02:48.584 net/igc: not in enabled drivers build config 00:02:48.584 net/ionic: not in enabled drivers build config 00:02:48.584 net/ipn3ke: not in enabled drivers build config 00:02:48.584 net/ixgbe: not in enabled drivers build config 00:02:48.584 net/mana: not in enabled drivers build config 00:02:48.584 net/memif: not in enabled drivers build config 00:02:48.584 net/mlx4: not in enabled drivers build config 
00:02:48.584 net/mlx5: not in enabled drivers build config 00:02:48.584 net/mvneta: not in enabled drivers build config 00:02:48.584 net/mvpp2: not in enabled drivers build config 00:02:48.584 net/netvsc: not in enabled drivers build config 00:02:48.584 net/nfb: not in enabled drivers build config 00:02:48.584 net/nfp: not in enabled drivers build config 00:02:48.584 net/ngbe: not in enabled drivers build config 00:02:48.584 net/null: not in enabled drivers build config 00:02:48.584 net/octeontx: not in enabled drivers build config 00:02:48.584 net/octeon_ep: not in enabled drivers build config 00:02:48.584 net/pcap: not in enabled drivers build config 00:02:48.584 net/pfe: not in enabled drivers build config 00:02:48.584 net/qede: not in enabled drivers build config 00:02:48.584 net/ring: not in enabled drivers build config 00:02:48.584 net/sfc: not in enabled drivers build config 00:02:48.584 net/softnic: not in enabled drivers build config 00:02:48.584 net/tap: not in enabled drivers build config 00:02:48.584 net/thunderx: not in enabled drivers build config 00:02:48.584 net/txgbe: not in enabled drivers build config 00:02:48.584 net/vdev_netvsc: not in enabled drivers build config 00:02:48.584 net/vhost: not in enabled drivers build config 00:02:48.584 net/virtio: not in enabled drivers build config 00:02:48.584 net/vmxnet3: not in enabled drivers build config 00:02:48.584 raw/*: missing internal dependency, "rawdev" 00:02:48.584 crypto/armv8: not in enabled drivers build config 00:02:48.584 crypto/bcmfs: not in enabled drivers build config 00:02:48.584 crypto/caam_jr: not in enabled drivers build config 00:02:48.584 crypto/ccp: not in enabled drivers build config 00:02:48.584 crypto/cnxk: not in enabled drivers build config 00:02:48.584 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.584 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.584 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.584 crypto/mlx5: not in enabled drivers build config 00:02:48.584 crypto/mvsam: not in enabled drivers build config 00:02:48.584 crypto/nitrox: not in enabled drivers build config 00:02:48.584 crypto/null: not in enabled drivers build config 00:02:48.584 crypto/octeontx: not in enabled drivers build config 00:02:48.584 crypto/openssl: not in enabled drivers build config 00:02:48.584 crypto/scheduler: not in enabled drivers build config 00:02:48.584 crypto/uadk: not in enabled drivers build config 00:02:48.584 crypto/virtio: not in enabled drivers build config 00:02:48.584 compress/isal: not in enabled drivers build config 00:02:48.584 compress/mlx5: not in enabled drivers build config 00:02:48.584 compress/octeontx: not in enabled drivers build config 00:02:48.584 compress/zlib: not in enabled drivers build config 00:02:48.584 regex/*: missing internal dependency, "regexdev" 00:02:48.584 ml/*: missing internal dependency, "mldev" 00:02:48.584 vdpa/ifc: not in enabled drivers build config 00:02:48.584 vdpa/mlx5: not in enabled drivers build config 00:02:48.584 vdpa/nfp: not in enabled drivers build config 00:02:48.584 vdpa/sfc: not in enabled drivers build config 00:02:48.584 event/*: missing internal dependency, "eventdev" 00:02:48.584 baseband/*: missing internal dependency, "bbdev" 00:02:48.584 gpu/*: missing internal dependency, "gpudev" 00:02:48.584 00:02:48.584 00:02:48.584 Build targets in project: 85 00:02:48.584 00:02:48.584 DPDK 23.11.0 00:02:48.584 00:02:48.584 User defined options 00:02:48.584 buildtype : debug 00:02:48.584 default_library : static 
00:02:48.584 libdir : lib 00:02:48.584 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:48.584 b_sanitize : address 00:02:48.584 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:02:48.584 c_link_args : 00:02:48.584 cpu_instruction_set: native 00:02:48.584 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:48.584 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:48.584 enable_docs : false 00:02:48.584 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:48.584 enable_kmods : false 00:02:48.584 tests : false 00:02:48.584 00:02:48.584 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.584 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:48.584 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.584 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.584 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.584 [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.584 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.584 [6/265] Linking static target lib/librte_kvargs.a 00:02:48.584 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.584 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.584 [9/265] Linking static target lib/librte_log.a 00:02:48.584 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.584 [11/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.584 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.584 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.584 [14/265] Linking static target lib/librte_telemetry.a 00:02:48.584 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.584 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.584 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.584 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:48.584 [19/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.584 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.584 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.584 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.584 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:48.584 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:48.584 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:48.584 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:48.584 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:48.584 [28/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:48.584 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:48.584 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:48.584 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:48.584 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:48.584 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:48.584 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:48.584 [35/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.585 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:48.585 [37/265] Linking target lib/librte_log.so.24.0 00:02:48.585 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:48.585 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:48.585 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:48.585 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:48.585 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:48.585 [43/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:48.585 [44/265] Linking target lib/librte_kvargs.so.24.0 00:02:48.585 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.585 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:48.585 [47/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.585 [48/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:48.585 [49/265] Linking target lib/librte_telemetry.so.24.0 00:02:48.585 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.842 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.842 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.842 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.842 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.842 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.842 [56/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:48.842 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.842 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.842 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.842 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.842 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.842 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:49.100 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:49.100 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.100 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:49.100 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:49.100 [67/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.100 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.357 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.357 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:49.357 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.357 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:49.357 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.357 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:49.357 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.357 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:49.357 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:49.357 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.357 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.615 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.615 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.615 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.615 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.615 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.615 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.615 [86/265] Linking static target lib/librte_ring.a 00:02:49.872 [87/265] Linking static target lib/librte_eal.a 00:02:49.872 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.872 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.872 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.872 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.872 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.872 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.872 [94/265] Linking static target lib/librte_mempool.a 00:02:50.130 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:50.130 [96/265] Linking static target lib/librte_rcu.a 00:02:50.130 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:50.130 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.130 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.388 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.388 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.388 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.388 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.389 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.389 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.646 [106/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.646 [107/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.646 [108/265] Linking static target lib/librte_net.a 00:02:50.646 [109/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.646 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.646 [111/265] Linking static target lib/librte_mbuf.a 00:02:50.646 [112/265] Linking static target lib/librte_meter.a 00:02:50.646 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.646 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.904 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.904 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.904 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.904 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.161 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.161 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.161 [121/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.161 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.418 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.418 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.418 [125/265] Linking static target lib/librte_pci.a 00:02:51.418 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.418 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.674 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.674 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.675 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.675 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.675 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.675 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.675 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.675 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.675 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.675 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.675 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.675 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.675 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.675 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.675 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.931 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.931 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.931 [145/265] Linking static target lib/librte_cmdline.a 00:02:52.188 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.188 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.188 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.188 [149/265] Compiling C 
object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.188 [150/265] Linking static target lib/librte_timer.a 00:02:52.188 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.188 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.446 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.446 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.704 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.704 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.704 [157/265] Linking static target lib/librte_compressdev.a 00:02:52.704 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.704 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.704 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.704 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.704 [162/265] Linking static target lib/librte_hash.a 00:02:52.704 [163/265] Linking static target lib/librte_ethdev.a 00:02:52.704 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.962 [165/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.962 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.962 [167/265] Linking static target lib/librte_dmadev.a 00:02:52.962 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.962 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.962 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.962 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.220 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.220 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.220 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.478 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.478 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.478 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.478 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.478 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.736 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.736 [181/265] Linking static target lib/librte_cryptodev.a 00:02:53.736 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.736 [183/265] Linking static target lib/librte_power.a 00:02:53.736 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.736 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.736 [186/265] Linking static target lib/librte_reorder.a 00:02:53.736 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.994 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.994 [189/265] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:53.994 [190/265] Linking static target lib/librte_security.a 00:02:53.994 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.252 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.252 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.509 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.509 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.509 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.766 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.766 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.766 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.766 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.766 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.766 [202/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.054 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.055 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.055 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.055 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.055 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.055 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.312 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.312 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.312 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.312 [212/265] Linking static target drivers/librte_bus_vdev.a 00:02:55.312 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.312 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.312 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.312 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:55.570 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.570 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.570 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.570 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.570 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.570 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.570 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:55.827 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.201 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.202 [226/265] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.202 [227/265] Linking target lib/librte_eal.so.24.0 00:02:57.202 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:57.202 [229/265] Linking target lib/librte_dmadev.so.24.0 00:02:57.202 [230/265] Linking target lib/librte_meter.so.24.0 00:02:57.202 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:57.202 [232/265] Linking target lib/librte_pci.so.24.0 00:02:57.202 [233/265] Linking target lib/librte_ring.so.24.0 00:02:57.202 [234/265] Linking target lib/librte_timer.so.24.0 00:02:57.459 [235/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:57.459 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:57.459 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:57.459 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:57.459 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:57.459 [240/265] Linking target lib/librte_rcu.so.24.0 00:02:57.459 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:57.459 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:57.717 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:57.717 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:57.717 [245/265] Linking target lib/librte_mbuf.so.24.0 00:02:57.717 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:57.717 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:57.975 [248/265] Linking target lib/librte_net.so.24.0 00:02:57.975 [249/265] Linking target lib/librte_reorder.so.24.0 00:02:57.975 [250/265] Linking target lib/librte_compressdev.so.24.0 00:02:57.975 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:57.975 [252/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.975 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:57.975 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:57.975 [255/265] Linking target lib/librte_hash.so.24.0 00:02:57.975 [256/265] Linking target lib/librte_cmdline.so.24.0 00:02:57.975 [257/265] Linking target lib/librte_security.so.24.0 00:02:57.975 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:58.232 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:58.232 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:58.232 [261/265] Linking target lib/librte_power.so.24.0 00:03:00.132 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.132 [263/265] Linking static target lib/librte_vhost.a 00:03:02.032 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.032 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:02.032 INFO: autodetecting backend as ninja 00:03:02.032 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.966 CC lib/ut/ut.o 00:03:02.966 CC lib/ut_mock/mock.o 00:03:02.966 CC lib/log/log_flags.o 00:03:02.966 CC lib/log/log.o 00:03:02.966 CC lib/log/log_deprecated.o 00:03:03.233 LIB 
libspdk_ut_mock.a 00:03:03.233 LIB libspdk_log.a 00:03:03.233 LIB libspdk_ut.a 00:03:03.233 CXX lib/trace_parser/trace.o 00:03:03.233 CC lib/dma/dma.o 00:03:03.233 CC lib/util/base64.o 00:03:03.233 CC lib/util/bit_array.o 00:03:03.233 CC lib/util/cpuset.o 00:03:03.233 CC lib/util/crc16.o 00:03:03.233 CC lib/ioat/ioat.o 00:03:03.233 CC lib/util/crc32.o 00:03:03.233 CC lib/util/crc32c.o 00:03:03.505 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.505 CC lib/util/crc32_ieee.o 00:03:03.505 CC lib/util/crc64.o 00:03:03.505 CC lib/vfio_user/host/vfio_user.o 00:03:03.505 CC lib/util/dif.o 00:03:03.505 CC lib/util/fd.o 00:03:03.505 LIB libspdk_dma.a 00:03:03.505 CC lib/util/file.o 00:03:03.505 CC lib/util/hexlify.o 00:03:03.505 CC lib/util/iov.o 00:03:03.505 CC lib/util/math.o 00:03:03.763 LIB libspdk_ioat.a 00:03:03.763 CC lib/util/pipe.o 00:03:03.763 CC lib/util/strerror_tls.o 00:03:03.763 CC lib/util/uuid.o 00:03:03.763 CC lib/util/string.o 00:03:03.763 LIB libspdk_vfio_user.a 00:03:03.763 CC lib/util/fd_group.o 00:03:03.763 CC lib/util/xor.o 00:03:03.763 CC lib/util/zipf.o 00:03:04.021 LIB libspdk_util.a 00:03:04.279 CC lib/vmd/vmd.o 00:03:04.279 CC lib/conf/conf.o 00:03:04.279 CC lib/rdma/rdma_verbs.o 00:03:04.279 CC lib/vmd/led.o 00:03:04.279 CC lib/idxd/idxd.o 00:03:04.279 CC lib/rdma/common.o 00:03:04.279 CC lib/idxd/idxd_user.o 00:03:04.279 CC lib/env_dpdk/env.o 00:03:04.279 CC lib/json/json_parse.o 00:03:04.279 LIB libspdk_trace_parser.a 00:03:04.537 CC lib/json/json_util.o 00:03:04.537 CC lib/json/json_write.o 00:03:04.537 CC lib/env_dpdk/memory.o 00:03:04.537 LIB libspdk_conf.a 00:03:04.537 CC lib/env_dpdk/pci.o 00:03:04.537 CC lib/env_dpdk/init.o 00:03:04.537 CC lib/env_dpdk/threads.o 00:03:04.537 LIB libspdk_rdma.a 00:03:04.537 CC lib/env_dpdk/pci_ioat.o 00:03:04.795 CC lib/env_dpdk/pci_virtio.o 00:03:04.795 CC lib/env_dpdk/pci_vmd.o 00:03:04.795 LIB libspdk_json.a 00:03:04.795 CC lib/env_dpdk/pci_idxd.o 00:03:04.795 CC lib/env_dpdk/pci_event.o 00:03:04.795 CC lib/env_dpdk/sigbus_handler.o 00:03:04.795 CC lib/env_dpdk/pci_dpdk.o 00:03:04.795 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.053 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:05.053 LIB libspdk_idxd.a 00:03:05.053 CC lib/jsonrpc/jsonrpc_server.o 00:03:05.053 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.053 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.053 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:05.053 LIB libspdk_vmd.a 00:03:05.310 LIB libspdk_jsonrpc.a 00:03:05.568 CC lib/rpc/rpc.o 00:03:05.568 LIB libspdk_rpc.a 00:03:05.825 CC lib/notify/notify.o 00:03:05.825 CC lib/notify/notify_rpc.o 00:03:05.825 CC lib/trace/trace.o 00:03:05.825 CC lib/trace/trace_flags.o 00:03:05.825 CC lib/trace/trace_rpc.o 00:03:05.826 CC lib/sock/sock.o 00:03:05.826 CC lib/sock/sock_rpc.o 00:03:05.826 LIB libspdk_env_dpdk.a 00:03:06.082 LIB libspdk_notify.a 00:03:06.082 LIB libspdk_trace.a 00:03:06.339 CC lib/thread/thread.o 00:03:06.339 CC lib/thread/iobuf.o 00:03:06.339 LIB libspdk_sock.a 00:03:06.339 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:06.339 CC lib/nvme/nvme_ctrlr.o 00:03:06.339 CC lib/nvme/nvme_fabric.o 00:03:06.339 CC lib/nvme/nvme_ns_cmd.o 00:03:06.339 CC lib/nvme/nvme_ns.o 00:03:06.339 CC lib/nvme/nvme_pcie_common.o 00:03:06.340 CC lib/nvme/nvme_pcie.o 00:03:06.340 CC lib/nvme/nvme_qpair.o 00:03:06.597 CC lib/nvme/nvme.o 00:03:06.854 CC lib/nvme/nvme_quirks.o 00:03:07.111 CC lib/nvme/nvme_transport.o 00:03:07.111 CC lib/nvme/nvme_discovery.o 00:03:07.111 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.111 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.111 CC 
lib/nvme/nvme_tcp.o 00:03:07.369 CC lib/nvme/nvme_opal.o 00:03:07.369 CC lib/nvme/nvme_io_msg.o 00:03:07.369 CC lib/nvme/nvme_poll_group.o 00:03:07.627 CC lib/nvme/nvme_zns.o 00:03:07.627 CC lib/nvme/nvme_cuse.o 00:03:07.627 CC lib/nvme/nvme_vfio_user.o 00:03:07.627 CC lib/nvme/nvme_rdma.o 00:03:07.884 LIB libspdk_thread.a 00:03:08.142 CC lib/accel/accel.o 00:03:08.142 CC lib/virtio/virtio.o 00:03:08.142 CC lib/init/json_config.o 00:03:08.142 CC lib/blob/blobstore.o 00:03:08.142 CC lib/blob/request.o 00:03:08.142 CC lib/blob/zeroes.o 00:03:08.142 CC lib/init/subsystem.o 00:03:08.399 CC lib/init/subsystem_rpc.o 00:03:08.399 CC lib/init/rpc.o 00:03:08.399 CC lib/virtio/virtio_vhost_user.o 00:03:08.399 CC lib/blob/blob_bs_dev.o 00:03:08.399 CC lib/accel/accel_rpc.o 00:03:08.399 CC lib/virtio/virtio_vfio_user.o 00:03:08.399 LIB libspdk_init.a 00:03:08.658 CC lib/accel/accel_sw.o 00:03:08.658 CC lib/virtio/virtio_pci.o 00:03:08.658 CC lib/event/reactor.o 00:03:08.658 CC lib/event/app.o 00:03:08.658 CC lib/event/log_rpc.o 00:03:08.658 CC lib/event/app_rpc.o 00:03:08.658 CC lib/event/scheduler_static.o 00:03:08.916 LIB libspdk_virtio.a 00:03:08.916 LIB libspdk_nvme.a 00:03:09.174 LIB libspdk_accel.a 00:03:09.174 LIB libspdk_event.a 00:03:09.432 CC lib/bdev/bdev.o 00:03:09.432 CC lib/bdev/bdev_rpc.o 00:03:09.432 CC lib/bdev/bdev_zone.o 00:03:09.432 CC lib/bdev/part.o 00:03:09.432 CC lib/bdev/scsi_nvme.o 00:03:11.369 LIB libspdk_blob.a 00:03:11.627 CC lib/blobfs/blobfs.o 00:03:11.627 CC lib/lvol/lvol.o 00:03:11.627 CC lib/blobfs/tree.o 00:03:12.194 LIB libspdk_bdev.a 00:03:12.452 CC lib/nvmf/ctrlr.o 00:03:12.452 CC lib/nvmf/ctrlr_discovery.o 00:03:12.452 CC lib/nvmf/ctrlr_bdev.o 00:03:12.452 CC lib/ftl/ftl_init.o 00:03:12.452 CC lib/ftl/ftl_core.o 00:03:12.452 CC lib/nvmf/subsystem.o 00:03:12.452 CC lib/nbd/nbd.o 00:03:12.452 CC lib/scsi/dev.o 00:03:12.452 CC lib/nvmf/nvmf.o 00:03:12.452 LIB libspdk_blobfs.a 00:03:12.711 CC lib/scsi/lun.o 00:03:12.711 LIB libspdk_lvol.a 00:03:12.711 CC lib/nvmf/nvmf_rpc.o 00:03:12.711 CC lib/ftl/ftl_layout.o 00:03:12.711 CC lib/ftl/ftl_debug.o 00:03:12.711 CC lib/nbd/nbd_rpc.o 00:03:12.969 CC lib/ftl/ftl_io.o 00:03:12.969 CC lib/scsi/port.o 00:03:12.969 CC lib/scsi/scsi.o 00:03:12.969 LIB libspdk_nbd.a 00:03:12.969 CC lib/scsi/scsi_bdev.o 00:03:12.969 CC lib/nvmf/transport.o 00:03:12.969 CC lib/nvmf/tcp.o 00:03:13.228 CC lib/nvmf/rdma.o 00:03:13.228 CC lib/scsi/scsi_pr.o 00:03:13.228 CC lib/ftl/ftl_sb.o 00:03:13.486 CC lib/ftl/ftl_l2p.o 00:03:13.486 CC lib/ftl/ftl_l2p_flat.o 00:03:13.486 CC lib/scsi/scsi_rpc.o 00:03:13.486 CC lib/scsi/task.o 00:03:13.486 CC lib/ftl/ftl_nv_cache.o 00:03:13.486 CC lib/ftl/ftl_band.o 00:03:13.745 CC lib/ftl/ftl_band_ops.o 00:03:13.745 CC lib/ftl/ftl_writer.o 00:03:13.745 CC lib/ftl/ftl_rq.o 00:03:13.745 LIB libspdk_scsi.a 00:03:13.745 CC lib/ftl/ftl_reloc.o 00:03:13.745 CC lib/ftl/ftl_l2p_cache.o 00:03:14.003 CC lib/ftl/ftl_p2l.o 00:03:14.003 CC lib/ftl/mngt/ftl_mngt.o 00:03:14.003 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:14.003 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:14.260 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.260 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.260 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:14.260 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:14.260 CC lib/iscsi/conn.o 00:03:14.260 CC lib/vhost/vhost.o 00:03:14.260 CC lib/vhost/vhost_rpc.o 00:03:14.517 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:14.517 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:14.517 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:14.517 CC lib/iscsi/init_grp.o 00:03:14.517 CC 
lib/iscsi/iscsi.o 00:03:14.775 CC lib/vhost/vhost_scsi.o 00:03:14.775 CC lib/vhost/vhost_blk.o 00:03:14.775 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:14.775 CC lib/iscsi/md5.o 00:03:14.775 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:14.775 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.033 CC lib/ftl/utils/ftl_conf.o 00:03:15.033 CC lib/ftl/utils/ftl_md.o 00:03:15.033 CC lib/ftl/utils/ftl_mempool.o 00:03:15.033 CC lib/iscsi/param.o 00:03:15.033 CC lib/iscsi/portal_grp.o 00:03:15.291 CC lib/iscsi/tgt_node.o 00:03:15.291 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.291 CC lib/vhost/rte_vhost_user.o 00:03:15.291 CC lib/ftl/utils/ftl_property.o 00:03:15.291 CC lib/iscsi/iscsi_subsystem.o 00:03:15.549 CC lib/iscsi/iscsi_rpc.o 00:03:15.549 CC lib/iscsi/task.o 00:03:15.549 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.549 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.549 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.549 LIB libspdk_nvmf.a 00:03:15.549 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.549 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.807 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.807 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.807 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.807 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.807 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.807 CC lib/ftl/base/ftl_base_dev.o 00:03:15.807 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.807 CC lib/ftl/ftl_trace.o 00:03:16.065 LIB libspdk_ftl.a 00:03:16.065 LIB libspdk_iscsi.a 00:03:16.323 LIB libspdk_vhost.a 00:03:16.583 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.583 CC module/sock/posix/posix.o 00:03:16.583 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.583 CC module/accel/ioat/accel_ioat.o 00:03:16.583 CC module/accel/error/accel_error.o 00:03:16.583 CC module/blob/bdev/blob_bdev.o 00:03:16.583 CC module/accel/dsa/accel_dsa.o 00:03:16.583 CC module/scheduler/gscheduler/gscheduler.o 00:03:16.583 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.583 CC module/accel/iaa/accel_iaa.o 00:03:16.583 LIB libspdk_env_dpdk_rpc.a 00:03:16.583 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.583 LIB libspdk_scheduler_gscheduler.a 00:03:16.841 CC module/accel/error/accel_error_rpc.o 00:03:16.841 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.841 CC module/accel/dsa/accel_dsa_rpc.o 00:03:16.841 LIB libspdk_scheduler_dynamic.a 00:03:16.841 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.841 LIB libspdk_accel_iaa.a 00:03:16.841 LIB libspdk_blob_bdev.a 00:03:16.841 LIB libspdk_accel_ioat.a 00:03:16.841 LIB libspdk_accel_error.a 00:03:16.841 LIB libspdk_accel_dsa.a 00:03:17.099 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.099 CC module/bdev/gpt/gpt.o 00:03:17.099 CC module/bdev/error/vbdev_error.o 00:03:17.099 CC module/bdev/delay/vbdev_delay.o 00:03:17.099 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.099 CC module/bdev/malloc/bdev_malloc.o 00:03:17.099 CC module/bdev/passthru/vbdev_passthru.o 00:03:17.099 CC module/bdev/null/bdev_null.o 00:03:17.099 CC module/bdev/nvme/bdev_nvme.o 00:03:17.099 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.099 CC module/bdev/gpt/vbdev_gpt.o 00:03:17.356 CC module/bdev/null/bdev_null_rpc.o 00:03:17.356 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.356 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.356 LIB libspdk_blobfs_bdev.a 00:03:17.356 LIB libspdk_sock_posix.a 00:03:17.356 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.356 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.356 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:17.356 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.356 LIB 
libspdk_bdev_null.a 00:03:17.356 LIB libspdk_bdev_error.a 00:03:17.356 LIB libspdk_bdev_passthru.a 00:03:17.614 LIB libspdk_bdev_gpt.a 00:03:17.614 CC module/bdev/nvme/nvme_rpc.o 00:03:17.614 LIB libspdk_bdev_delay.a 00:03:17.614 CC module/bdev/split/vbdev_split.o 00:03:17.614 LIB libspdk_bdev_malloc.a 00:03:17.614 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.614 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.614 CC module/bdev/raid/bdev_raid.o 00:03:17.614 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.614 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.614 LIB libspdk_bdev_lvol.a 00:03:17.871 LIB libspdk_bdev_split.a 00:03:17.871 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.871 CC module/bdev/aio/bdev_aio.o 00:03:17.871 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.871 CC module/bdev/ftl/bdev_ftl.o 00:03:17.871 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.871 CC module/bdev/raid/raid0.o 00:03:17.871 CC module/bdev/raid/raid1.o 00:03:17.871 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.871 CC module/bdev/raid/concat.o 00:03:18.128 CC module/bdev/nvme/vbdev_opal.o 00:03:18.128 LIB libspdk_bdev_zone_block.a 00:03:18.128 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.128 LIB libspdk_bdev_aio.a 00:03:18.128 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.128 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:18.128 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.128 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.128 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.386 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.386 LIB libspdk_bdev_iscsi.a 00:03:18.386 LIB libspdk_bdev_ftl.a 00:03:18.644 LIB libspdk_bdev_raid.a 00:03:18.644 LIB libspdk_bdev_virtio.a 00:03:19.577 LIB libspdk_bdev_nvme.a 00:03:19.577 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.577 CC module/event/subsystems/sock/sock.o 00:03:19.577 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.577 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.577 CC module/event/subsystems/vmd/vmd.o 00:03:19.577 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.577 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.835 LIB libspdk_event_sock.a 00:03:19.835 LIB libspdk_event_iobuf.a 00:03:19.835 LIB libspdk_event_vmd.a 00:03:19.835 LIB libspdk_event_vhost_blk.a 00:03:19.835 LIB libspdk_event_scheduler.a 00:03:19.835 CC module/event/subsystems/accel/accel.o 00:03:20.093 LIB libspdk_event_accel.a 00:03:20.352 CC module/event/subsystems/bdev/bdev.o 00:03:20.352 LIB libspdk_event_bdev.a 00:03:20.610 CC module/event/subsystems/nbd/nbd.o 00:03:20.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.610 CC module/event/subsystems/scsi/scsi.o 00:03:20.868 LIB libspdk_event_nbd.a 00:03:20.868 LIB libspdk_event_scsi.a 00:03:20.868 LIB libspdk_event_nvmf.a 00:03:20.868 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.868 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.126 LIB libspdk_event_vhost_scsi.a 00:03:21.126 LIB libspdk_event_iscsi.a 00:03:21.384 CXX app/trace/trace.o 00:03:21.384 CC app/trace_record/trace_record.o 00:03:21.384 CC examples/ioat/perf/perf.o 00:03:21.384 CC examples/accel/perf/accel_perf.o 00:03:21.384 CC app/nvmf_tgt/nvmf_main.o 00:03:21.384 CC examples/nvme/hello_world/hello_world.o 00:03:21.384 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.384 CC test/accel/dif/dif.o 00:03:21.384 CC examples/blob/hello_world/hello_blob.o 00:03:21.384 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.642 LINK nvmf_tgt 00:03:21.642 LINK ioat_perf 00:03:21.642 
LINK iscsi_tgt 00:03:21.642 LINK spdk_trace_record 00:03:21.642 LINK hello_world 00:03:21.642 LINK hello_blob 00:03:21.642 LINK hello_bdev 00:03:21.642 LINK spdk_trace 00:03:21.900 LINK dif 00:03:21.900 LINK accel_perf 00:03:22.158 CC examples/blob/cli/blobcli.o 00:03:22.158 CC examples/ioat/verify/verify.o 00:03:22.158 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.416 LINK verify 00:03:22.674 LINK blobcli 00:03:22.674 CC examples/nvme/reconnect/reconnect.o 00:03:22.932 CC examples/sock/hello_world/hello_sock.o 00:03:23.190 LINK bdevperf 00:03:23.190 LINK hello_sock 00:03:23.190 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.190 LINK reconnect 00:03:23.190 LINK lsvmd 00:03:24.124 CC examples/vmd/led/led.o 00:03:24.124 LINK led 00:03:24.382 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.640 CC test/app/bdev_svc/bdev_svc.o 00:03:24.640 CC examples/nvmf/nvmf/nvmf.o 00:03:24.640 CC examples/util/zipf/zipf.o 00:03:24.640 CC examples/nvme/arbitration/arbitration.o 00:03:24.640 LINK bdev_svc 00:03:24.640 CC examples/nvme/hotplug/hotplug.o 00:03:24.898 LINK zipf 00:03:24.898 CC app/spdk_tgt/spdk_tgt.o 00:03:24.898 LINK nvme_manage 00:03:24.898 LINK nvmf 00:03:25.156 LINK hotplug 00:03:25.156 LINK spdk_tgt 00:03:25.156 LINK arbitration 00:03:25.413 CC examples/thread/thread/thread_ex.o 00:03:25.413 CC test/bdev/bdevio/bdevio.o 00:03:25.671 LINK thread 00:03:25.671 CC test/blobfs/mkfs/mkfs.o 00:03:25.929 LINK bdevio 00:03:25.929 LINK mkfs 00:03:26.188 TEST_HEADER include/spdk/accel.h 00:03:26.188 TEST_HEADER include/spdk/accel_module.h 00:03:26.188 TEST_HEADER include/spdk/assert.h 00:03:26.188 TEST_HEADER include/spdk/barrier.h 00:03:26.188 TEST_HEADER include/spdk/base64.h 00:03:26.188 TEST_HEADER include/spdk/bdev.h 00:03:26.188 TEST_HEADER include/spdk/bdev_module.h 00:03:26.188 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.188 TEST_HEADER include/spdk/bit_array.h 00:03:26.188 TEST_HEADER include/spdk/bit_pool.h 00:03:26.188 TEST_HEADER include/spdk/blob.h 00:03:26.188 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.188 TEST_HEADER include/spdk/blobfs.h 00:03:26.188 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.188 TEST_HEADER include/spdk/conf.h 00:03:26.188 TEST_HEADER include/spdk/config.h 00:03:26.188 TEST_HEADER include/spdk/cpuset.h 00:03:26.188 TEST_HEADER include/spdk/crc16.h 00:03:26.188 TEST_HEADER include/spdk/crc32.h 00:03:26.188 TEST_HEADER include/spdk/crc64.h 00:03:26.188 TEST_HEADER include/spdk/dif.h 00:03:26.188 TEST_HEADER include/spdk/dma.h 00:03:26.188 TEST_HEADER include/spdk/endian.h 00:03:26.188 TEST_HEADER include/spdk/env.h 00:03:26.188 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.188 TEST_HEADER include/spdk/event.h 00:03:26.188 TEST_HEADER include/spdk/fd.h 00:03:26.188 TEST_HEADER include/spdk/fd_group.h 00:03:26.188 TEST_HEADER include/spdk/file.h 00:03:26.188 TEST_HEADER include/spdk/ftl.h 00:03:26.188 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.188 TEST_HEADER include/spdk/hexlify.h 00:03:26.188 TEST_HEADER include/spdk/histogram_data.h 00:03:26.188 TEST_HEADER include/spdk/idxd.h 00:03:26.188 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.188 TEST_HEADER include/spdk/init.h 00:03:26.188 TEST_HEADER include/spdk/ioat.h 00:03:26.188 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.188 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.188 TEST_HEADER include/spdk/json.h 00:03:26.188 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.188 TEST_HEADER include/spdk/likely.h 00:03:26.188 TEST_HEADER include/spdk/log.h 00:03:26.188 TEST_HEADER include/spdk/lvol.h 00:03:26.188 TEST_HEADER 
include/spdk/memory.h 00:03:26.188 TEST_HEADER include/spdk/mmio.h 00:03:26.188 TEST_HEADER include/spdk/nbd.h 00:03:26.188 TEST_HEADER include/spdk/notify.h 00:03:26.188 TEST_HEADER include/spdk/nvme.h 00:03:26.188 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.188 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.188 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.188 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.188 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.188 TEST_HEADER include/spdk/nvmf.h 00:03:26.188 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.188 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.188 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.188 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.188 TEST_HEADER include/spdk/opal.h 00:03:26.188 TEST_HEADER include/spdk/opal_spec.h 00:03:26.188 TEST_HEADER include/spdk/pci_ids.h 00:03:26.188 TEST_HEADER include/spdk/pipe.h 00:03:26.188 TEST_HEADER include/spdk/queue.h 00:03:26.188 TEST_HEADER include/spdk/reduce.h 00:03:26.188 TEST_HEADER include/spdk/rpc.h 00:03:26.188 TEST_HEADER include/spdk/scheduler.h 00:03:26.188 TEST_HEADER include/spdk/scsi.h 00:03:26.188 TEST_HEADER include/spdk/scsi_spec.h 00:03:26.188 TEST_HEADER include/spdk/sock.h 00:03:26.188 TEST_HEADER include/spdk/stdinc.h 00:03:26.188 TEST_HEADER include/spdk/string.h 00:03:26.188 TEST_HEADER include/spdk/thread.h 00:03:26.188 TEST_HEADER include/spdk/trace.h 00:03:26.188 TEST_HEADER include/spdk/trace_parser.h 00:03:26.188 TEST_HEADER include/spdk/tree.h 00:03:26.188 TEST_HEADER include/spdk/ublk.h 00:03:26.188 TEST_HEADER include/spdk/util.h 00:03:26.188 TEST_HEADER include/spdk/uuid.h 00:03:26.188 TEST_HEADER include/spdk/version.h 00:03:26.188 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.188 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.188 TEST_HEADER include/spdk/vhost.h 00:03:26.188 TEST_HEADER include/spdk/vmd.h 00:03:26.188 TEST_HEADER include/spdk/xor.h 00:03:26.188 TEST_HEADER include/spdk/zipf.h 00:03:26.188 CXX test/cpp_headers/accel.o 00:03:26.188 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.188 CC examples/idxd/perf/perf.o 00:03:26.447 CC test/dma/test_dma/test_dma.o 00:03:26.447 LINK cmb_copy 00:03:26.447 CXX test/cpp_headers/accel_module.o 00:03:26.705 CXX test/cpp_headers/assert.o 00:03:26.705 LINK idxd_perf 00:03:26.705 LINK test_dma 00:03:26.705 CXX test/cpp_headers/barrier.o 00:03:26.964 CXX test/cpp_headers/base64.o 00:03:27.222 CXX test/cpp_headers/bdev.o 00:03:27.222 CC examples/nvme/abort/abort.o 00:03:27.222 CXX test/cpp_headers/bdev_module.o 00:03:27.479 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:27.738 CXX test/cpp_headers/bdev_zone.o 00:03:27.738 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.738 LINK abort 00:03:27.997 LINK pmr_persistence 00:03:28.255 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:28.255 CXX test/cpp_headers/bit_array.o 00:03:28.255 LINK nvme_fuzz 00:03:28.514 CXX test/cpp_headers/bit_pool.o 00:03:28.514 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.514 CC app/spdk_lspci/spdk_lspci.o 00:03:28.774 CC test/env/mem_callbacks/mem_callbacks.o 00:03:28.774 LINK spdk_lspci 00:03:28.774 CXX test/cpp_headers/blob.o 00:03:28.774 CC test/env/vtophys/vtophys.o 00:03:29.033 LINK interrupt_tgt 00:03:29.033 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:29.033 CXX test/cpp_headers/blob_bdev.o 00:03:29.033 LINK vtophys 00:03:29.033 CC test/env/memory/memory_ut.o 00:03:29.033 LINK env_dpdk_post_init 00:03:29.290 CXX test/cpp_headers/blobfs.o 00:03:29.290 LINK mem_callbacks 00:03:29.290 
CXX test/cpp_headers/blobfs_bdev.o 00:03:29.547 CXX test/cpp_headers/conf.o 00:03:29.547 CXX test/cpp_headers/config.o 00:03:29.547 CXX test/cpp_headers/cpuset.o 00:03:29.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.804 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:29.804 CXX test/cpp_headers/crc16.o 00:03:29.804 CXX test/cpp_headers/crc32.o 00:03:29.804 CXX test/cpp_headers/crc64.o 00:03:29.804 CC test/env/pci/pci_ut.o 00:03:29.804 LINK memory_ut 00:03:30.061 CXX test/cpp_headers/dif.o 00:03:30.061 CXX test/cpp_headers/dma.o 00:03:30.061 CC app/spdk_nvme_perf/perf.o 00:03:30.061 CC test/event/event_perf/event_perf.o 00:03:30.061 CXX test/cpp_headers/endian.o 00:03:30.061 CC app/spdk_nvme_identify/identify.o 00:03:30.061 LINK event_perf 00:03:30.061 LINK vhost_fuzz 00:03:30.061 CC app/spdk_nvme_discover/discovery_aer.o 00:03:30.318 CXX test/cpp_headers/env.o 00:03:30.318 CXX test/cpp_headers/env_dpdk.o 00:03:30.318 LINK iscsi_fuzz 00:03:30.318 LINK pci_ut 00:03:30.318 CXX test/cpp_headers/event.o 00:03:30.318 LINK spdk_nvme_discover 00:03:30.318 CC app/spdk_top/spdk_top.o 00:03:30.575 CXX test/cpp_headers/fd.o 00:03:30.575 CXX test/cpp_headers/fd_group.o 00:03:30.575 CXX test/cpp_headers/file.o 00:03:30.832 CXX test/cpp_headers/ftl.o 00:03:30.832 CXX test/cpp_headers/gpt_spec.o 00:03:30.832 CXX test/cpp_headers/hexlify.o 00:03:30.832 CXX test/cpp_headers/histogram_data.o 00:03:30.832 LINK spdk_nvme_perf 00:03:31.090 CC test/event/reactor/reactor.o 00:03:31.090 CC test/lvol/esnap/esnap.o 00:03:31.090 CXX test/cpp_headers/idxd.o 00:03:31.090 CC test/event/reactor_perf/reactor_perf.o 00:03:31.090 CC test/event/app_repeat/app_repeat.o 00:03:31.090 LINK spdk_nvme_identify 00:03:31.090 LINK reactor 00:03:31.347 CXX test/cpp_headers/idxd_spec.o 00:03:31.347 LINK reactor_perf 00:03:31.347 LINK app_repeat 00:03:31.347 CXX test/cpp_headers/init.o 00:03:31.347 CC test/app/histogram_perf/histogram_perf.o 00:03:31.347 LINK spdk_top 00:03:31.604 CXX test/cpp_headers/ioat.o 00:03:31.604 LINK histogram_perf 00:03:31.604 CC app/vhost/vhost.o 00:03:31.604 CXX test/cpp_headers/ioat_spec.o 00:03:31.867 LINK vhost 00:03:31.867 CXX test/cpp_headers/iscsi_spec.o 00:03:31.867 CXX test/cpp_headers/json.o 00:03:31.867 CC test/event/scheduler/scheduler.o 00:03:32.126 CC app/spdk_dd/spdk_dd.o 00:03:32.126 CXX test/cpp_headers/jsonrpc.o 00:03:32.126 CXX test/cpp_headers/likely.o 00:03:32.126 CXX test/cpp_headers/log.o 00:03:32.126 LINK scheduler 00:03:32.126 CXX test/cpp_headers/lvol.o 00:03:32.383 CXX test/cpp_headers/memory.o 00:03:32.383 CC app/fio/nvme/fio_plugin.o 00:03:32.383 CXX test/cpp_headers/mmio.o 00:03:32.383 CC test/app/jsoncat/jsoncat.o 00:03:32.383 CXX test/cpp_headers/nbd.o 00:03:32.383 CXX test/cpp_headers/notify.o 00:03:32.383 CXX test/cpp_headers/nvme.o 00:03:32.383 CXX test/cpp_headers/nvme_intel.o 00:03:32.383 LINK spdk_dd 00:03:32.383 LINK jsoncat 00:03:32.383 CXX test/cpp_headers/nvme_ocssd.o 00:03:32.641 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:32.641 CXX test/cpp_headers/nvme_spec.o 00:03:32.641 CC test/app/stub/stub.o 00:03:32.641 CXX test/cpp_headers/nvme_zns.o 00:03:32.641 CXX test/cpp_headers/nvmf.o 00:03:32.899 LINK stub 00:03:32.899 CC app/fio/bdev/fio_plugin.o 00:03:32.899 LINK spdk_nvme 00:03:32.899 CXX test/cpp_headers/nvmf_cmd.o 00:03:32.899 CC test/nvme/aer/aer.o 00:03:33.155 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.155 CC test/nvme/reset/reset.o 00:03:33.412 CXX test/cpp_headers/nvmf_spec.o 00:03:33.412 LINK aer 00:03:33.412 LINK spdk_bdev 00:03:33.412 CXX 
test/cpp_headers/nvmf_transport.o 00:03:33.412 LINK reset 00:03:33.669 CXX test/cpp_headers/opal.o 00:03:33.926 CXX test/cpp_headers/opal_spec.o 00:03:33.926 CC test/nvme/sgl/sgl.o 00:03:33.926 CC test/rpc_client/rpc_client_test.o 00:03:33.926 CXX test/cpp_headers/pci_ids.o 00:03:34.183 CXX test/cpp_headers/pipe.o 00:03:34.183 LINK rpc_client_test 00:03:34.183 LINK sgl 00:03:34.440 CXX test/cpp_headers/queue.o 00:03:34.440 CXX test/cpp_headers/reduce.o 00:03:34.440 CXX test/cpp_headers/rpc.o 00:03:34.697 CXX test/cpp_headers/scheduler.o 00:03:34.697 CC test/nvme/e2edp/nvme_dp.o 00:03:34.697 CC test/nvme/overhead/overhead.o 00:03:34.697 CXX test/cpp_headers/scsi.o 00:03:35.002 CXX test/cpp_headers/scsi_spec.o 00:03:35.002 LINK nvme_dp 00:03:35.002 LINK overhead 00:03:35.002 CC test/nvme/err_injection/err_injection.o 00:03:35.002 CC test/nvme/startup/startup.o 00:03:35.002 CXX test/cpp_headers/sock.o 00:03:35.002 CC test/nvme/reserve/reserve.o 00:03:35.260 LINK startup 00:03:35.260 LINK err_injection 00:03:35.260 CXX test/cpp_headers/stdinc.o 00:03:35.260 LINK reserve 00:03:35.260 CXX test/cpp_headers/string.o 00:03:35.518 CC test/nvme/simple_copy/simple_copy.o 00:03:35.518 CC test/nvme/connect_stress/connect_stress.o 00:03:35.518 CXX test/cpp_headers/thread.o 00:03:35.518 LINK connect_stress 00:03:35.776 LINK simple_copy 00:03:35.776 CXX test/cpp_headers/trace.o 00:03:35.776 CXX test/cpp_headers/trace_parser.o 00:03:36.035 CXX test/cpp_headers/tree.o 00:03:36.035 CC test/nvme/boot_partition/boot_partition.o 00:03:36.293 CXX test/cpp_headers/ublk.o 00:03:36.293 CC test/thread/poller_perf/poller_perf.o 00:03:36.293 CXX test/cpp_headers/util.o 00:03:36.293 LINK boot_partition 00:03:36.293 LINK poller_perf 00:03:36.293 CXX test/cpp_headers/uuid.o 00:03:36.293 CC test/thread/lock/spdk_lock.o 00:03:36.293 CC test/nvme/compliance/nvme_compliance.o 00:03:36.552 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.552 CXX test/cpp_headers/version.o 00:03:36.552 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.810 CC test/nvme/fdp/fdp.o 00:03:36.810 LINK doorbell_aers 00:03:36.810 LINK fused_ordering 00:03:36.810 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.810 LINK esnap 00:03:36.810 LINK nvme_compliance 00:03:36.810 CXX test/cpp_headers/vfio_user_spec.o 00:03:37.069 CXX test/cpp_headers/vhost.o 00:03:37.069 CC test/nvme/cuse/cuse.o 00:03:37.069 LINK fdp 00:03:37.069 CXX test/cpp_headers/vmd.o 00:03:37.069 CXX test/cpp_headers/xor.o 00:03:37.327 CXX test/cpp_headers/zipf.o 00:03:37.327 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:37.586 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:37.586 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:37.586 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:37.586 LINK histogram_ut 00:03:37.844 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:37.844 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:37.844 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:37.844 LINK cuse 00:03:38.103 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:38.103 LINK blob_bdev_ut 00:03:38.103 LINK scsi_nvme_ut 00:03:38.103 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:38.103 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:38.362 LINK spdk_lock 00:03:38.362 LINK gpt_ut 00:03:38.362 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:38.362 LINK tree_ut 00:03:38.621 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:38.621 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:38.621 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:39.188 
LINK bdev_zone_ut 00:03:39.188 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:39.188 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:39.188 LINK vbdev_lvol_ut 00:03:39.448 LINK blobfs_bdev_ut 00:03:39.448 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:39.707 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:39.966 LINK accel_ut 00:03:39.966 LINK blobfs_async_ut 00:03:40.226 LINK blobfs_sync_ut 00:03:40.226 LINK vbdev_zone_block_ut 00:03:40.226 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:40.484 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:40.484 LINK bdev_raid_ut 00:03:40.743 CC test/unit/lib/event/app.c/app_ut.o 00:03:40.743 LINK dma_ut 00:03:40.743 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:41.001 LINK bdev_raid_sb_ut 00:03:41.001 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:41.001 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:41.260 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:41.260 LINK app_ut 00:03:41.260 LINK ioat_ut 00:03:41.519 LINK part_ut 00:03:41.519 LINK concat_ut 00:03:41.519 LINK reactor_ut 00:03:41.519 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:41.778 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:41.778 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:41.778 LINK raid1_ut 00:03:41.778 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:42.037 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:42.037 CC test/unit/lib/log/log.c/log_ut.o 00:03:42.295 LINK bdev_ut 00:03:42.295 LINK json_util_ut 00:03:42.295 LINK jsonrpc_server_ut 00:03:42.295 LINK log_ut 00:03:42.554 LINK json_write_ut 00:03:42.554 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:42.554 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:42.554 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:42.554 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:42.814 LINK conn_ut 00:03:42.814 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:43.074 LINK init_grp_ut 00:03:43.074 LINK notify_ut 00:03:43.074 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:43.074 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:43.333 LINK bdev_ut 00:03:43.333 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:43.591 LINK param_ut 00:03:43.591 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:43.850 LINK portal_grp_ut 00:03:43.850 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:44.108 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:44.108 LINK nvme_ut 00:03:44.108 LINK bdev_nvme_ut 00:03:44.108 LINK json_parse_ut 00:03:44.367 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:44.367 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:44.367 LINK lvol_ut 00:03:44.626 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:44.626 LINK dev_ut 00:03:44.626 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:44.885 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:45.144 LINK iscsi_ut 00:03:45.144 LINK blob_ut 00:03:45.402 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:45.402 LINK lun_ut 00:03:45.402 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:45.660 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:45.660 LINK nvme_ns_ut 00:03:45.660 LINK nvme_ctrlr_cmd_ut 00:03:45.660 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:45.918 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:45.918 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:45.918 LINK subsystem_ut 00:03:45.918 LINK scsi_ut 00:03:45.918 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:46.175 CC 
test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:46.175 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:46.175 LINK tgt_node_ut 00:03:46.433 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:46.690 LINK ctrlr_ut 00:03:46.948 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:46.948 LINK posix_ut 00:03:46.948 LINK sock_ut 00:03:47.206 LINK scsi_bdev_ut 00:03:47.206 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:47.206 LINK nvme_ns_ocssd_cmd_ut 00:03:47.464 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:47.464 LINK tcp_ut 00:03:47.464 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:47.464 LINK nvme_ctrlr_ut 00:03:47.464 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:47.464 LINK nvme_ns_cmd_ut 00:03:47.722 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:47.722 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:47.980 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:47.980 LINK ctrlr_bdev_ut 00:03:47.980 LINK scsi_pr_ut 00:03:47.980 LINK ctrlr_discovery_ut 00:03:48.239 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:48.239 LINK nvme_pcie_ut 00:03:48.239 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:48.239 LINK nvme_quirks_ut 00:03:48.497 LINK nvme_poll_group_ut 00:03:48.497 LINK nvmf_ut 00:03:48.497 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:48.497 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:48.497 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:48.755 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:48.755 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:49.014 LINK nvme_qpair_ut 00:03:49.014 LINK base64_ut 00:03:49.014 LINK nvme_transport_ut 00:03:49.272 LINK nvme_io_msg_ut 00:03:49.272 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:49.272 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:49.272 LINK iobuf_ut 00:03:49.272 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:49.529 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:49.529 LINK pci_event_ut 00:03:49.786 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:49.786 LINK bit_array_ut 00:03:49.786 LINK subsystem_ut 00:03:49.786 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:49.786 LINK nvme_pcie_common_ut 00:03:50.045 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:50.045 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:50.304 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:50.304 LINK cpuset_ut 00:03:50.304 LINK nvme_fabric_ut 00:03:50.567 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:50.567 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:50.831 LINK crc16_ut 00:03:50.832 LINK crc32c_ut 00:03:50.832 LINK crc32_ieee_ut 00:03:50.832 LINK nvme_opal_ut 00:03:51.090 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:51.090 LINK nvme_tcp_ut 00:03:51.090 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:51.090 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:51.090 LINK thread_ut 00:03:51.090 LINK transport_ut 00:03:51.090 LINK crc64_ut 00:03:51.090 CC test/unit/lib/util/math.c/math_ut.o 00:03:51.090 LINK rdma_ut 00:03:51.349 LINK iov_ut 00:03:51.349 LINK math_ut 00:03:51.349 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:51.349 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:51.349 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:51.607 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:51.607 CC test/unit/lib/util/string.c/string_ut.o 00:03:51.607 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:51.607 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:51.607 LINK 
nvme_cuse_ut 00:03:51.864 LINK rpc_ut 00:03:51.864 LINK idxd_user_ut 00:03:51.864 LINK string_ut 00:03:51.864 LINK xor_ut 00:03:52.122 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:52.122 LINK pipe_ut 00:03:52.122 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:52.122 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:52.122 LINK nvme_rdma_ut 00:03:52.122 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:52.122 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:52.380 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:52.380 LINK dif_ut 00:03:52.380 LINK idxd_ut 00:03:52.380 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:52.380 LINK ftl_bitmap_ut 00:03:52.380 LINK ftl_l2p_ut 00:03:52.380 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:52.380 LINK common_ut 00:03:52.638 LINK ftl_mempool_ut 00:03:52.638 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:52.638 LINK ftl_io_ut 00:03:52.896 LINK ftl_mngt_ut 00:03:53.154 LINK ftl_band_ut 00:03:53.412 LINK vhost_ut 00:03:53.669 LINK ftl_sb_ut 00:03:53.669 LINK ftl_layout_upgrade_ut 00:03:53.928 00:03:53.928 real 1m48.882s 00:03:53.928 user 9m39.838s 00:03:53.928 sys 1m45.759s 00:03:53.928 20:46:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:53.928 20:46:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:53.928 ************************************ 00:03:53.928 END TEST unittest_build 00:03:53.928 ************************************ 00:03:53.928 20:46:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:53.928 20:46:22 -- nvmf/common.sh@7 -- # uname -s 00:03:53.928 20:46:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.928 20:46:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.928 20:46:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.928 20:46:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.928 20:46:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.928 20:46:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.928 20:46:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.928 20:46:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.928 20:46:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.928 20:46:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.928 20:46:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:14866646-40f4-46bd-b671-55d6ed931b6a 00:03:53.928 20:46:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=14866646-40f4-46bd-b671-55d6ed931b6a 00:03:53.928 20:46:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.928 20:46:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.928 20:46:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.928 20:46:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:53.928 20:46:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.928 20:46:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.928 20:46:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.928 20:46:22 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:53.928 20:46:22 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:53.928 20:46:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:53.928 20:46:22 -- paths/export.sh@5 -- # export PATH 00:03:53.928 20:46:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:53.928 20:46:22 -- nvmf/common.sh@46 -- # : 0 00:03:53.928 20:46:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:53.928 20:46:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:53.928 20:46:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:53.928 20:46:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.928 20:46:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.928 20:46:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:53.928 20:46:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:53.928 20:46:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:53.928 20:46:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:53.928 20:46:22 -- spdk/autotest.sh@32 -- # uname -s 00:03:53.928 20:46:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:54.185 20:46:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:54.185 20:46:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.185 20:46:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:54.185 20:46:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.185 20:46:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:54.185 20:46:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:54.185 20:46:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:54.185 20:46:22 -- spdk/autotest.sh@48 -- # udevadm_pid=92401 00:03:54.185 20:46:22 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:54.185 20:46:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:54.185 20:46:22 -- spdk/autotest.sh@54 -- # echo 92410 00:03:54.185 20:46:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:54.185 20:46:22 -- spdk/autotest.sh@56 -- # echo 92411 00:03:54.185 20:46:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:54.185 20:46:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:54.185 20:46:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:54.185 20:46:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:54.185 20:46:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:54.185 20:46:22 -- common/autotest_common.sh@10 -- # set +x 00:03:54.185 20:46:22 -- spdk/autotest.sh@70 
-- # create_test_list 00:03:54.185 20:46:22 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:54.185 20:46:22 -- common/autotest_common.sh@10 -- # set +x 00:03:54.185 20:46:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:54.185 20:46:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:54.185 20:46:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:54.185 20:46:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:54.185 20:46:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:54.185 20:46:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:54.185 20:46:22 -- common/autotest_common.sh@1440 -- # uname 00:03:54.185 20:46:22 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:54.185 20:46:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:54.185 20:46:22 -- common/autotest_common.sh@1460 -- # uname 00:03:54.185 20:46:22 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:54.185 20:46:22 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:54.185 20:46:22 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:54.185 20:46:22 -- spdk/autotest.sh@83 -- # hash lcov 00:03:54.185 20:46:22 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:54.185 20:46:22 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:54.185 --rc lcov_branch_coverage=1 00:03:54.185 --rc lcov_function_coverage=1 00:03:54.185 --rc genhtml_branch_coverage=1 00:03:54.185 --rc genhtml_function_coverage=1 00:03:54.185 --rc genhtml_legend=1 00:03:54.185 --rc geninfo_all_blocks=1 00:03:54.185 ' 00:03:54.185 20:46:22 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:54.185 --rc lcov_branch_coverage=1 00:03:54.185 --rc lcov_function_coverage=1 00:03:54.185 --rc genhtml_branch_coverage=1 00:03:54.185 --rc genhtml_function_coverage=1 00:03:54.185 --rc genhtml_legend=1 00:03:54.185 --rc geninfo_all_blocks=1 00:03:54.185 ' 00:03:54.185 20:46:22 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:54.185 --rc lcov_branch_coverage=1 00:03:54.185 --rc lcov_function_coverage=1 00:03:54.185 --rc genhtml_branch_coverage=1 00:03:54.185 --rc genhtml_function_coverage=1 00:03:54.185 --rc genhtml_legend=1 00:03:54.185 --rc geninfo_all_blocks=1 00:03:54.185 --no-external' 00:03:54.185 20:46:22 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:54.185 --rc lcov_branch_coverage=1 00:03:54.185 --rc lcov_function_coverage=1 00:03:54.185 --rc genhtml_branch_coverage=1 00:03:54.185 --rc genhtml_function_coverage=1 00:03:54.185 --rc genhtml_legend=1 00:03:54.185 --rc geninfo_all_blocks=1 00:03:54.185 --no-external' 00:03:54.185 20:46:22 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:54.185 lcov: LCOV version 1.15 00:03:54.185 20:46:22 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:09.082 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:09.082 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 
[geninfo goes on to emit the identical two-line warning — '<path>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <path>.gcno' — for the remaining lib/ftl/upgrade objects (ftl_p2l_upgrade, ftl_chunk_upgrade) and for several dozen header stubs under /home/vagrant/spdk_repo/spdk/test/cpp_headers; those repeated pairs (00:04:09.082 through 00:04:35.617) are elided here, and the tail of the run follows.] 00:04:35.617
/home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:35.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:35.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:35.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:35.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:35.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:35.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:35.617 20:47:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:35.617 20:47:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.617 20:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:35.617 20:47:03 -- spdk/autotest.sh@102 -- # rm -f 00:04:35.617 20:47:03 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:35.876 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:35.876 20:47:03 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:35.876 20:47:03 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:35.876 20:47:03 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:35.876 20:47:03 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:35.876 20:47:03 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:35.876 20:47:03 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:35.876 20:47:03 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:35.876 20:47:03 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.876 20:47:03 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:35.876 20:47:03 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:35.876 20:47:03 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:35.876 20:47:03 -- spdk/autotest.sh@121 -- # grep -v p 00:04:35.876 20:47:03 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.876 20:47:03 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:35.876 20:47:03 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:35.876 20:47:03 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:35.876 20:47:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.876 No valid GPT data, bailing 00:04:35.876 20:47:04 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.134 20:47:04 -- scripts/common.sh@393 -- # pt= 00:04:36.134 20:47:04 -- scripts/common.sh@394 -- # return 1 00:04:36.134 20:47:04 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.134 1+0 records in 
00:04:36.134 1+0 records out 00:04:36.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442015 s, 237 MB/s 00:04:36.134 20:47:04 -- spdk/autotest.sh@129 -- # sync 00:04:36.134 20:47:04 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.134 20:47:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.134 20:47:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.508 20:47:05 -- spdk/autotest.sh@135 -- # uname -s 00:04:37.508 20:47:05 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:37.508 20:47:05 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.508 20:47:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.508 20:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.508 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:37.508 ************************************ 00:04:37.508 START TEST setup.sh 00:04:37.508 ************************************ 00:04:37.508 20:47:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.508 * Looking for test storage... 00:04:37.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.508 20:47:05 -- setup/test-setup.sh@10 -- # uname -s 00:04:37.508 20:47:05 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.508 20:47:05 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.508 20:47:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.508 20:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.508 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:37.508 ************************************ 00:04:37.508 START TEST acl 00:04:37.508 ************************************ 00:04:37.508 20:47:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.508 * Looking for test storage... 
00:04:37.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.508 20:47:05 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.508 20:47:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:37.508 20:47:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:37.508 20:47:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:37.508 20:47:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:37.508 20:47:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:37.508 20:47:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:37.508 20:47:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.508 20:47:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:37.508 20:47:05 -- setup/acl.sh@12 -- # devs=() 00:04:37.508 20:47:05 -- setup/acl.sh@12 -- # declare -a devs 00:04:37.508 20:47:05 -- setup/acl.sh@13 -- # drivers=() 00:04:37.508 20:47:05 -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.508 20:47:05 -- setup/acl.sh@51 -- # setup reset 00:04:37.508 20:47:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.508 20:47:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.075 20:47:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.075 20:47:06 -- setup/acl.sh@16 -- # local dev driver 00:04:38.075 20:47:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.075 20:47:06 -- setup/acl.sh@15 -- # setup output status 00:04:38.075 20:47:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.075 20:47:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.075 Hugepages 00:04:38.075 node hugesize free / total 00:04:38.075 20:47:06 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.075 20:47:06 -- setup/acl.sh@19 -- # continue 00:04:38.075 20:47:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.075 00:04:38.075 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.075 20:47:06 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.075 20:47:06 -- setup/acl.sh@19 -- # continue 00:04:38.075 20:47:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.333 20:47:06 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:38.333 20:47:06 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:38.333 20:47:06 -- setup/acl.sh@20 -- # continue 00:04:38.333 20:47:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.333 20:47:06 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:38.333 20:47:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.333 20:47:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:38.333 20:47:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.333 20:47:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.333 20:47:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.333 20:47:06 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:38.333 20:47:06 -- setup/acl.sh@54 -- # run_test denied denied 00:04:38.333 20:47:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.333 20:47:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.333 20:47:06 -- common/autotest_common.sh@10 -- # set +x 00:04:38.333 ************************************ 00:04:38.333 START TEST denied 00:04:38.333 ************************************ 00:04:38.333 20:47:06 -- common/autotest_common.sh@1104 -- # denied 00:04:38.333 20:47:06 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:38.333 20:47:06 -- setup/acl.sh@38 -- # setup output config 00:04:38.333 20:47:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.333 20:47:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.333 20:47:06 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:39.708 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:39.708 20:47:07 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:39.708 20:47:07 -- setup/acl.sh@28 -- # local dev driver 00:04:39.708 20:47:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:39.708 20:47:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:39.708 20:47:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:39.708 20:47:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:39.708 20:47:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:39.708 20:47:07 -- setup/acl.sh@41 -- # setup reset 00:04:39.708 20:47:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.708 20:47:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.275 00:04:40.275 real 0m1.835s 00:04:40.275 user 0m0.495s 00:04:40.275 sys 0m1.386s 00:04:40.275 20:47:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.275 20:47:08 -- common/autotest_common.sh@10 -- # set +x 00:04:40.275 ************************************ 00:04:40.275 END TEST denied 00:04:40.275 ************************************ 00:04:40.275 20:47:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.275 20:47:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:40.275 20:47:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.275 20:47:08 -- common/autotest_common.sh@10 -- # set +x 00:04:40.275 ************************************ 00:04:40.275 START TEST allowed 00:04:40.275 ************************************ 00:04:40.275 20:47:08 -- common/autotest_common.sh@1104 -- # allowed 00:04:40.275 20:47:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:40.275 20:47:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:40.275 20:47:08 -- setup/acl.sh@45 -- # setup output config 00:04:40.275 20:47:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.275 20:47:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.653 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.653 20:47:09 -- setup/acl.sh@47 -- # verify 00:04:41.653 20:47:09 -- setup/acl.sh@28 -- # local dev driver 00:04:41.653 20:47:09 -- setup/acl.sh@48 -- # setup reset 00:04:41.653 20:47:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.653 20:47:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.220 00:04:42.220 real 0m1.942s 00:04:42.220 user 0m0.495s 00:04:42.220 sys 0m1.456s 00:04:42.220 20:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.220 ************************************ 00:04:42.220 END TEST allowed 00:04:42.220 20:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.220 ************************************ 00:04:42.220 00:04:42.220 real 0m4.716s 00:04:42.220 user 0m1.564s 00:04:42.220 sys 0m3.257s 00:04:42.220 20:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.220 20:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.220 ************************************ 00:04:42.220 END TEST acl 
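[The acl suite above drives scripts/setup.sh through PCI_BLOCKED and PCI_ALLOWED and verifies the outcome in sysfs: the blocked controller must stay on its kernel driver ('Skipping denied controller', still nvme), while the allowed one is handed to a userspace driver ('nvme -> uio_pci_generic'). A minimal sketch of the driver-binding check the verify step performs with readlink -f at setup/acl.sh@32; pci_driver is an illustrative name, not the script's own:]
# Report which driver a PCI function is currently bound to, via the
# sysfs 'driver' symlink of the device's BDF address.
pci_driver() {
    local link=/sys/bus/pci/devices/$1/driver
    [[ -e $link ]] || { echo unbound; return; }   # no driver bound
    basename "$(readlink -f "$link")"             # e.g. nvme or uio_pci_generic
}
pci_driver 0000:00:06.0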
00:04:42.220 ************************************ 00:04:42.221 20:47:10 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.221 20:47:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.221 20:47:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.221 20:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.221 ************************************ 00:04:42.221 START TEST hugepages 00:04:42.221 ************************************ 00:04:42.221 20:47:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.481 * Looking for test storage... 00:04:42.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.481 20:47:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:42.481 20:47:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:42.481 20:47:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:42.481 20:47:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:42.481 20:47:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:42.481 20:47:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:42.481 20:47:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:42.481 20:47:10 -- setup/common.sh@18 -- # local node= 00:04:42.481 20:47:10 -- setup/common.sh@19 -- # local var val 00:04:42.481 20:47:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.481 20:47:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.481 20:47:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.481 20:47:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.481 20:47:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.481 20:47:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.481 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.481 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.481 20:47:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 2990448 kB' 'MemAvailable: 7402568 kB' 'Buffers: 35528 kB' 'Cached: 4515932 kB' 'SwapCached: 0 kB' 'Active: 995260 kB' 'Inactive: 3673588 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 128024 kB' 'Active(file): 994228 kB' 'Inactive(file): 3545564 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 146760 kB' 'Mapped: 68096 kB' 'Shmem: 2600 kB' 'KReclaimable: 193844 kB' 'Slab: 257240 kB' 'SReclaimable: 193844 kB' 'SUnreclaim: 63396 kB' 'KernelStack: 4488 kB' 'PageTables: 3888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024328 kB' 'Committed_AS: 492012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:42.481 20:47:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.481 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.481 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.481 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.481 20:47:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:42.481 20:47:10 -- setup/common.sh@32 -- # continue
[the Hugepagesize lookup walks the rest of /proc/meminfo with the same trace triplet per field — 'read -r var val _', '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]', 'continue' — and those repeated entries are elided here; the scan resumes below just before the matching field.] 00:04:42.482
20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # continue 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.482 20:47:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.482 20:47:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.482 20:47:10 -- setup/common.sh@33 -- # echo 2048 00:04:42.482 20:47:10 -- setup/common.sh@33 -- # return 0 00:04:42.482 20:47:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:42.482 20:47:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:42.482 20:47:10 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:42.482 20:47:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:42.482 20:47:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:42.482 20:47:10 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:42.482 20:47:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:42.482 20:47:10 -- setup/hugepages.sh@207 -- # get_nodes 00:04:42.482 20:47:10 -- setup/hugepages.sh@27 -- # local node 00:04:42.482 20:47:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.482 20:47:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:42.482 20:47:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.482 20:47:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.482 20:47:10 -- setup/hugepages.sh@208 -- # clear_hp 00:04:42.482 20:47:10 -- setup/hugepages.sh@37 -- # local node hp 00:04:42.482 20:47:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.482 20:47:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.482 20:47:10 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.482 20:47:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.482 20:47:10 -- setup/hugepages.sh@41 -- # echo 0 
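[clear_hp above (setup/hugepages.sh@37-41) resets every huge page pool before default_setup sizes its own: for each NUMA node it loops over every page-size directory and echoes 0 — presumably into each pool's nr_hugepages file, the standard sysfs knob, since the redirection target is not shown in the trace. A minimal sketch over that sysfs layout (requires root):]
# Zero the reserved huge pages of every size on every NUMA node, as the
# clear_hp loop traced above does before a test reconfigures the pools.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done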
00:04:42.482 20:47:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.482 20:47:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.482 20:47:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:42.482 20:47:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.482 20:47:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.482 20:47:10 -- common/autotest_common.sh@10 -- # set +x 00:04:42.482 ************************************ 00:04:42.482 START TEST default_setup 00:04:42.482 ************************************ 00:04:42.482 20:47:10 -- common/autotest_common.sh@1104 -- # default_setup 00:04:42.482 20:47:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:42.482 20:47:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.482 20:47:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.482 20:47:10 -- setup/hugepages.sh@51 -- # shift 00:04:42.482 20:47:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.482 20:47:10 -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.482 20:47:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.482 20:47:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.482 20:47:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.482 20:47:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.482 20:47:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.482 20:47:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.482 20:47:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.482 20:47:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.482 20:47:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.482 20:47:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.482 20:47:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.482 20:47:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.482 20:47:10 -- setup/hugepages.sh@73 -- # return 0 00:04:42.482 20:47:10 -- setup/hugepages.sh@137 -- # setup output 00:04:42.482 20:47:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.482 20:47:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:43.057 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.626 20:47:11 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:43.626 20:47:11 -- setup/hugepages.sh@89 -- # local node 00:04:43.626 20:47:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.626 20:47:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.626 20:47:11 -- setup/hugepages.sh@92 -- # local surp 00:04:43.626 20:47:11 -- setup/hugepages.sh@93 -- # local resv 00:04:43.626 20:47:11 -- setup/hugepages.sh@94 -- # local anon 00:04:43.626 20:47:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.626 20:47:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.626 20:47:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.626 20:47:11 -- setup/common.sh@18 -- # local node= 00:04:43.626 20:47:11 -- setup/common.sh@19 -- # local var val 00:04:43.626 20:47:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.626 20:47:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.626 20:47:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.626 20:47:11 -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:04:43.626 20:47:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.626 20:47:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.626 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.626 20:47:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemAvailable: 9485820 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995396 kB' 'Inactive: 3688560 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143044 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545516 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161676 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257164 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63492 kB' 'KernelStack: 4368 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:43.626 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.626 20:47:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.626 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.626 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.626 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.626 20:47:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.626 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- 
setup/common.sh@32 -- # continue
[the AnonHugePages lookup repeats the same per-field '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue' trace over the remaining /proc/meminfo fields; the log excerpt is truncated mid-scan at this point.]
# read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # continue 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.627 20:47:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.627 20:47:11 -- setup/common.sh@33 -- # echo 0 00:04:43.627 20:47:11 -- setup/common.sh@33 -- # return 0 00:04:43.627 20:47:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:43.627 20:47:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.627 20:47:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.627 20:47:11 -- setup/common.sh@18 -- # local node= 00:04:43.627 20:47:11 -- setup/common.sh@19 -- # local var val 00:04:43.627 20:47:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.627 20:47:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.627 20:47:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.627 20:47:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.627 20:47:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.627 20:47:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.627 20:47:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.628 20:47:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemAvailable: 9485820 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995396 kB' 'Inactive: 3688560 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143044 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545516 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161676 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257164 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63492 kB' 'KernelStack: 4368 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:43.628 20:47:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
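Editor's note: every condensed cycle above is the same four xtrace records (IFS=': ', read -r var val _, a [[ key == pattern ]] compare, continue), emitted once per /proc/meminfo key until the requested key is found. Below is a minimal sketch of get_meminfo reconstructed from the setup/common.sh@16-@33 markers; names follow the trace, but this is not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {    # usage: get_meminfo <key> [numa-node]
        local get=$1
        local node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node argument, read that node's own meminfo instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <N> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do   # "_" swallows the trailing "kB"
            [[ $var == "$get" ]] || continue   # the continue storm in the trace
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp     # -> 0 on this host, matching "# echo 0" above
    get_meminfo HugePages_Surp 0   # same key, scoped to NUMA node 0's meminfo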
00:04:43.628 20:47:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.629 20:47:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.629 20:47:11 -- setup/common.sh@18 -- # local node=
00:04:43.629 20:47:11 -- setup/common.sh@19 -- # local var val
00:04:43.629 20:47:11 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.629 20:47:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.629 20:47:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.629 20:47:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.629 20:47:11 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.629 20:47:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.629 20:47:11 -- setup/common.sh@31 -- # IFS=': '
00:04:43.629 20:47:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemAvailable: 9485820 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995388 kB' 'Inactive: 3688636 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143120 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545516 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161676 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257180 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63508 kB' 'KernelStack: 4352 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:43.629 20:47:11 -- setup/common.sh@31 -- # read -r var val _
00:04:43.629 [xtrace condensed: key scan repeats from MemTotal onward until HugePages_Rsvd matches]
00:04:43.630 20:47:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.630 20:47:11 -- setup/common.sh@33 -- # echo 0
00:04:43.630 20:47:11 -- setup/common.sh@33 -- # return 0
00:04:43.630 20:47:11 -- setup/hugepages.sh@100 -- # resv=0
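Editor's note: the two counters just sampled come straight from /proc/meminfo: HugePages_Rsvd is pages a mapping has reserved but not yet faulted in, HugePages_Surp is pages allocated above nr_hugepages via overcommit. A quick way to eyeball all of them at once (an editor's aside, not part of the test suite; output matches the snapshots in this run):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB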
00:04:43.630 nr_hugepages=1024
00:04:43.630 20:47:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.630 resv_hugepages=0
00:04:43.630 20:47:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.630 surplus_hugepages=0
00:04:43.630 20:47:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.630 anon_hugepages=0
00:04:43.630 20:47:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.630 20:47:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.630 20:47:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
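Editor's note: those two arithmetic guards are the heart of the test. With anon=0, surp=0 and resv=0 just sampled, hugepages.sh@107 checks that the kernel's HugePages_Total equals the requested nr_hugepages plus surplus plus reserved, i.e. 1024 == 1024 + 0 + 0 in this run. Reconstructed shape of the check (a sketch; the literal 1024 in the trace is the already-expanded command substitution, which is how bash xtrace prints it):

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    # nr_hugepages was set to 1024 earlier in default_setup
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))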
00:04:43.630 20:47:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.630 20:47:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.630 20:47:11 -- setup/common.sh@18 -- # local node=
00:04:43.630 20:47:11 -- setup/common.sh@19 -- # local var val
00:04:43.630 20:47:11 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.630 20:47:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.630 20:47:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.630 20:47:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.630 20:47:11 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.630 20:47:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.630 20:47:11 -- setup/common.sh@31 -- # IFS=': '
00:04:43.630 20:47:11 -- setup/common.sh@31 -- # read -r var val _
00:04:43.630 20:47:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemAvailable: 9485820 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995388 kB' 'Inactive: 3688376 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142860 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545516 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161416 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257180 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63508 kB' 'KernelStack: 4420 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 509296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:43.631 [xtrace condensed: key scan repeats from MemTotal onward until HugePages_Total matches]
00:04:43.631 20:47:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.631 20:47:11 -- setup/common.sh@33 -- # echo 1024
00:04:43.631 20:47:11 -- setup/common.sh@33 -- # return 0
00:04:43.631 20:47:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.631 20:47:11 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.631 20:47:11 -- setup/hugepages.sh@27 -- # local node
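Editor's note: get_nodes, whose body is traced next, records the current per-node hugepage totals. It globs the NUMA node directories, stores each node's HugePages_Total in nodes_sys, and counts the nodes (one on this VM). A sketch under the same assumptions as the earlier get_meminfo sketch (extglob on; not verbatim source; the trace shows the value already expanded, e.g. nodes_sys[0]=1024):

    nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # index by node number; value is that node's current HugePages_Total
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if no NUMA node directories were found
    }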
00:04:43.631 20:47:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.631 20:47:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.631 20:47:11 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.631 20:47:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.631 20:47:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.631 20:47:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.631 20:47:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.631 20:47:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.631 20:47:11 -- setup/common.sh@18 -- # local node=0
00:04:43.631 20:47:11 -- setup/common.sh@19 -- # local var val
00:04:43.631 20:47:11 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.631 20:47:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.631 20:47:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.631 20:47:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.631 20:47:11 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.631 20:47:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.631 20:47:11 -- setup/common.sh@31 -- # IFS=': '
00:04:43.631 20:47:11 -- setup/common.sh@31 -- # read -r var val _
00:04:43.632 20:47:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemUsed: 7169156 kB' 'SwapCached: 0 kB' 'Active: 995388 kB' 'Inactive: 3688572 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143056 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545516 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4551516 kB' 'Mapped: 67904 kB' 'AnonPages: 161612 kB' 'Shmem: 2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193672 kB' 'Slab: 257180 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
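Editor's note: this is the first node-scoped read: mem_f switched to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the common.sh@29 substitution strips before the key scan (hence the shorter snapshot above, with MemUsed and FilePages instead of the system-wide fields). The transformation in isolation:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Surp: 0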
00:04:43.632 [xtrace condensed: node0 key scan repeats from MemTotal onward until HugePages_Surp matches]
00:04:43.632 20:47:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.632 20:47:11 -- setup/common.sh@33 -- # echo 0
00:04:43.632 20:47:11 -- setup/common.sh@33 -- # return 0
00:04:43.632 20:47:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.632 20:47:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.632 20:47:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.632 20:47:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.632 node0=1024 expecting 1024
00:04:43.632 20:47:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:43.632 20:47:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.632
00:04:43.632 real 0m1.164s
00:04:43.632 user 0m0.347s
00:04:43.632 sys 0m0.820s
00:04:43.632 20:47:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.632 20:47:11 -- common/autotest_common.sh@10 -- # set +x
00:04:43.632 ************************************
00:04:43.632 END TEST default_setup
00:04:43.632 ************************************
00:04:43.633 20:47:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:43.633 20:47:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.633 20:47:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.633 20:47:11 -- common/autotest_common.sh@10 -- # set +x
00:04:43.633 ************************************
00:04:43.633 START TEST per_node_1G_alloc
00:04:43.633 ************************************
00:04:43.633 20:47:11 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
setup/hugepages.sh@62 -- # local user_nodes 00:04:43.633 20:47:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.633 20:47:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:43.633 20:47:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.633 20:47:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.633 20:47:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:43.633 20:47:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.633 20:47:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.633 20:47:11 -- setup/hugepages.sh@73 -- # return 0 00:04:43.633 20:47:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.633 20:47:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:43.633 20:47:11 -- setup/hugepages.sh@146 -- # setup output 00:04:43.633 20:47:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.633 20:47:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:43.891 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:44.149 20:47:12 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:44.149 20:47:12 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:44.149 20:47:12 -- setup/hugepages.sh@89 -- # local node 00:04:44.149 20:47:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.149 20:47:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.149 20:47:12 -- setup/hugepages.sh@92 -- # local surp 00:04:44.149 20:47:12 -- setup/hugepages.sh@93 -- # local resv 00:04:44.149 20:47:12 -- setup/hugepages.sh@94 -- # local anon 00:04:44.149 20:47:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.149 20:47:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.149 20:47:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.149 20:47:12 -- setup/common.sh@18 -- # local node= 00:04:44.149 20:47:12 -- setup/common.sh@19 -- # local var val 00:04:44.149 20:47:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.149 20:47:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.149 20:47:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.149 20:47:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.149 20:47:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.149 20:47:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6123152 kB' 'MemAvailable: 10535168 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995396 kB' 'Inactive: 3688764 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143244 kB' 'Active(file): 994340 kB' 'Inactive(file): 3545520 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 162264 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257284 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63612 kB' 'KernelStack: 4436 kB' 'PageTables: 3908 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.149 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.149 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- 
setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.150 20:47:12 -- setup/common.sh@33 -- # echo 0 00:04:44.150 20:47:12 -- setup/common.sh@33 -- # return 0 00:04:44.150 20:47:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:44.150 20:47:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.150 20:47:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.150 20:47:12 -- setup/common.sh@18 -- # local node= 00:04:44.150 20:47:12 -- setup/common.sh@19 -- # local var val 00:04:44.150 20:47:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.150 20:47:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.150 20:47:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.150 20:47:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.150 20:47:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.150 20:47:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.150 20:47:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6122900 kB' 'MemAvailable: 10534912 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995404 kB' 'Inactive: 3688528 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 143016 kB' 'Active(file): 994344 kB' 'Inactive(file): 3545512 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161788 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257312 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63640 kB' 'KernelStack: 4388 kB' 'PageTables: 3808 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # 
continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.150 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.150 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.151 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.151 20:47:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.411 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.411 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
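
The long runs of "continue" entries in this trace are setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time: each entry is one iteration of a read loop that splits "Key: value kB" on IFS=': ', compares the key against the requested field (AnonHugePages above, HugePages_Surp here), and moves on until it matches, at which point it echoes the value and returns (the HugePages_Surp scan completes just below with "echo 0" / "return 0"). The per_node_1G_alloc test driving this asked get_test_nr_hugepages for 1048576 kB on node 0, which at the 2048 kB Hugepagesize reported in these snapshots works out to the nr_hugepages=512 seen throughout. A minimal sketch of the loop reconstructed from this trace — the process-substitution plumbing and the trailing return 1 are assumptions, and the real helper also takes an optional node argument (covered further down):

    get_meminfo() {
        local get=$1 var val _
        local mem_f=/proc/meminfo mem
        mapfile -t mem < "$mem_f"
        while IFS=': ' read -r var val _; do           # splits "Key: value kB" -> key, value
            [[ $var == "$get" ]] && echo "$val" && return 0
            continue                                   # one trace entry per non-matching key
        done < <(printf '%s\n' "${mem[@]}")            # assumption: exact plumbing not shown in the trace
        return 1                                       # assumption: never reached in this run
    }
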
00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.412 20:47:12 -- setup/common.sh@33 -- # echo 0 00:04:44.412 20:47:12 -- setup/common.sh@33 -- # return 0 00:04:44.412 20:47:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:44.412 20:47:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.412 20:47:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.412 20:47:12 -- setup/common.sh@18 -- # local node= 00:04:44.412 20:47:12 -- setup/common.sh@19 -- # local var val 00:04:44.412 20:47:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.412 20:47:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.412 20:47:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.412 20:47:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.412 20:47:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.412 20:47:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6122900 kB' 'MemAvailable: 10534912 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995396 kB' 'Inactive: 3688352 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142840 kB' 'Active(file): 994344 kB' 'Inactive(file): 3545512 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161572 kB' 'Mapped: 67880 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257416 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63744 kB' 'KernelStack: 4396 kB' 'PageTables: 3796 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
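
When the same helper is called with a node number — as it is for node 0 near the end of this section — the trace shows mem_f switching from /proc/meminfo to the per-node sysfs file and the "Node N " prefix being stripped from every line before the scan, so the loop body itself is unchanged. Roughly, assuming extglob for the +([0-9]) pattern the trace uses:

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
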
00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.412 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.412 20:47:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 
20:47:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 
20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.413 20:47:12 -- setup/common.sh@33 -- # echo 0 00:04:44.413 20:47:12 -- setup/common.sh@33 -- # return 0 00:04:44.413 20:47:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:44.413 nr_hugepages=512 00:04:44.413 20:47:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:44.413 resv_hugepages=0 00:04:44.413 20:47:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.413 surplus_hugepages=0 00:04:44.413 20:47:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.413 anon_hugepages=0 00:04:44.413 20:47:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.413 20:47:12 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:44.413 20:47:12 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:44.413 20:47:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.413 20:47:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.413 20:47:12 -- setup/common.sh@18 -- # local node= 00:04:44.413 20:47:12 -- setup/common.sh@19 -- # local var val 00:04:44.413 20:47:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:44.413 20:47:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.413 20:47:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.413 20:47:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.413 20:47:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.413 20:47:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6122900 kB' 'MemAvailable: 10534912 kB' 'Buffers: 35528 kB' 'Cached: 4515988 kB' 'SwapCached: 0 kB' 'Active: 995392 kB' 'Inactive: 3688780 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143268 kB' 'Active(file): 994344 kB' 'Inactive(file): 3545512 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161976 kB' 'Mapped: 67920 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257296 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63624 kB' 'KernelStack: 4464 kB' 'PageTables: 3768 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.413 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.413 20:47:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
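
With the three earlier scans resolved (anon=0, surp=0, resv=0) and HugePages_Total about to echo 512 below, the bookkeeping verify_nr_hugepages performs in the hugepages.sh trace lines reduces to a few arithmetic checks. A condensed sketch using the values from this run — the real script loops over every NUMA node, and this VM has only node 0:

    anon=0 surp=0 resv=0      # AnonHugePages, HugePages_Surp, HugePages_Rsvd from the scans above
    nr_hugepages=512          # HugePages_Total, echoed at the end of the scan below
    (( 512 == nr_hugepages + surp + resv ))   # requested pool is fully accounted for
    (( 512 == nr_hugepages ))                 # and none of it is surplus or reserved
    nodes_test[0]=512                         # expectation set earlier by get_test_nr_hugepages_per_node
    (( nodes_test[0] += resv ))               # reserved pages still count toward the node's share
    # node 0's actual count then comes from get_meminfo HugePages_Surp 0,
    # i.e. the per-node sysfs read that follows below
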
00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.414 20:47:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.414 20:47:12 -- setup/common.sh@32 
00:04:44.414 20:47:12 -- setup/common.sh@32 -- # continue
00:04:44.414 [xtrace condensed: HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped are skipped the same way]
00:04:44.414 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.414 20:47:12 -- setup/common.sh@33 -- # echo 512
00:04:44.414 20:47:12 -- setup/common.sh@33 -- # return 0
00:04:44.414 20:47:12 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:44.414 20:47:12 -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.415 20:47:12 -- setup/hugepages.sh@27 -- # local node
00:04:44.415 20:47:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.415 20:47:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.415 20:47:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.415 20:47:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.415 20:47:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.415 20:47:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.415 20:47:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.415 20:47:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.415 20:47:12 -- setup/common.sh@18 -- # local node=0
00:04:44.415 20:47:12 -- setup/common.sh@19 -- # local var val
00:04:44.415 20:47:12 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.415 20:47:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.415 20:47:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.415 20:47:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.415 20:47:12 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.415 20:47:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.415 20:47:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6122900 kB' 'MemUsed: 6120064 kB' 'SwapCached: 0 kB' 'Active: 995392 kB' 'Inactive: 3688612 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143100 kB' 'Active(file): 994344 kB' 'Inactive(file): 3545512 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4551516 kB' 'Mapped: 67920 kB' 'AnonPages: 161812 kB' 'Shmem: 2596 kB' 'KernelStack: 4448 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193672 kB' 'Slab: 257296 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:44.415 [xtrace condensed: every node0 key from MemTotal through HugePages_Free is skipped with the same continue/IFS/read cycle]
00:04:44.416 20:47:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.416 20:47:12 -- setup/common.sh@33 -- # echo 0
00:04:44.416 20:47:12 -- setup/common.sh@33 -- # return 0
00:04:44.416 20:47:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.416 20:47:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.416 20:47:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.416 node0=512 expecting 512
00:04:44.416 20:47:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.416 20:47:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:44.416 real	0m0.729s
00:04:44.416 user	0m0.295s
00:04:44.416 sys	0m0.471s
00:04:44.416 20:47:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.416 20:47:12 -- common/autotest_common.sh@10 -- # set +x
00:04:44.416 ************************************
00:04:44.416 END TEST per_node_1G_alloc
00:04:44.416 ************************************
00:04:44.416 20:47:12 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:44.416 20:47:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:44.416 20:47:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:44.416 20:47:12 -- common/autotest_common.sh@10 -- # set +x
00:04:44.416 ************************************
00:04:44.416 START TEST even_2G_alloc
00:04:44.416 ************************************
00:04:44.416 20:47:12 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:44.416 20:47:12 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:44.416 20:47:12 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.416 20:47:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.416 20:47:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:44.416 20:47:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:44.416 20:47:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.416 20:47:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.416 20:47:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.416 20:47:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.416 20:47:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.416 20:47:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:44.416 20:47:12 -- setup/hugepages.sh@83 -- # : 0
00:04:44.416 20:47:12 -- setup/hugepages.sh@84 -- # : 0
00:04:44.416 20:47:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.416 20:47:12 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:44.416 20:47:12 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:44.416 20:47:12 -- setup/hugepages.sh@153 -- # setup output
00:04:44.416 20:47:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.416 20:47:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:44.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:44.674 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.243 20:47:13 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:45.243 20:47:13 -- setup/hugepages.sh@89 -- # local node
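The numbers at hugepages.sh@49 through @82 decode as follows: even_2G_alloc asks for 2 GiB worth of hugepages; with the 2048 kB Hugepagesize shown in the meminfo dumps, 2097152 kB / 2048 kB = 1024 pages, and with a single NUMA node the whole allocation lands on node 0. A rough reconstruction of that arithmetic, consistent with the trace but with hypothetical variable handling rather than the exact SPDK code:

    # Hypothetical reconstruction of the size -> page-count math traced above.
    default_hugepages=2048            # kB per page, per "Hugepagesize: 2048 kB"
    size=2097152                      # kB requested, i.e. 2 GiB
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))   # -> 1024
    fi
    # One NUMA node on this VM, so the "even" split is trivial:
    _no_nodes=1
    nodes_test[_no_nodes - 1]=$nr_hugepages            # node0 gets all 1024 pages
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"

With HUGE_EVEN_ALLOC=yes and NRHUGE=1024 exported, setup.sh then reserves those pages before verify_nr_hugepages re-reads the counters below.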
00:04:45.243 20:47:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.243 20:47:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.243 20:47:13 -- setup/hugepages.sh@92 -- # local surp
00:04:45.243 20:47:13 -- setup/hugepages.sh@93 -- # local resv
00:04:45.243 20:47:13 -- setup/hugepages.sh@94 -- # local anon
00:04:45.243 20:47:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.243 20:47:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.243 20:47:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.243 20:47:13 -- setup/common.sh@18 -- # local node=
00:04:45.243 20:47:13 -- setup/common.sh@19 -- # local var val
00:04:45.243 20:47:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.243 20:47:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.243 20:47:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.243 20:47:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.243 20:47:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.243 20:47:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.244 20:47:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073808 kB' 'MemAvailable: 9485824 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995432 kB' 'Inactive: 3688864 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 143368 kB' 'Active(file): 994364 kB' 'Inactive(file): 3545496 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 162316 kB' 'Mapped: 67960 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257152 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63480 kB' 'KernelStack: 4516 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:45.244 [xtrace condensed: every key from MemTotal through HardwareCorrupted is skipped with the same continue/IFS/read cycle]
00:04:45.244 20:47:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.244 20:47:13 -- setup/common.sh@33 -- # echo 0
00:04:45.244 20:47:13 -- setup/common.sh@33 -- # return 0
00:04:45.244 20:47:13 -- setup/hugepages.sh@97 -- # anon=0
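The check at hugepages.sh@96 explains the anon=0 above: anonymous transparent hugepages are only counted against the test's budget when THP is not globally disabled. A sketch of that guard, reusing the get_meminfo_sketch helper from earlier (illustrative, not the verbatim SPDK logic):

    # Sketch of the THP guard at setup/hugepages.sh@96.
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    # e.g. "always [madvise] never" -- the bracketed word is the active mode
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # helper sketched earlier
    else
        anon=0   # THP disabled: no anonymous hugepages to account for
    fi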
00:04:45.244 20:47:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.244 20:47:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.244 20:47:13 -- setup/common.sh@18 -- # local node=
00:04:45.244 20:47:13 -- setup/common.sh@19 -- # local var val
00:04:45.244 20:47:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.244 20:47:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.244 20:47:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.244 20:47:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.244 20:47:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.244 20:47:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.244 20:47:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074516 kB' 'MemAvailable: 9486532 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995428 kB' 'Inactive: 3688324 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142844 kB' 'Active(file): 994380 kB' 'Inactive(file): 3545480 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161712 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257064 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63392 kB' 'KernelStack: 4408 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:45.244 [xtrace condensed: every key from MemTotal through HugePages_Rsvd is skipped with the same continue/IFS/read cycle]
00:04:45.245 20:47:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.245 20:47:13 -- setup/common.sh@33 -- # echo 0
00:04:45.245 20:47:13 -- setup/common.sh@33 -- # return 0
00:04:45.245 20:47:13 -- setup/hugepages.sh@99 -- # surp=0
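At this point the function has anon=0 and surp=0 and will fetch HugePages_Rsvd next. The counters it keeps re-reading can be eyeballed directly; a quick manual equivalent, assuming a standard /proc/meminfo and nothing SPDK-specific:

    # Show the hugepage counters verify_nr_hugepages keeps polling:
    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo
    # On this run: AnonHugePages 0 kB, Total 1024, Free 1024, Rsvd 0, Surp 0.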
00:04:45.245 20:47:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.245 20:47:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.245 20:47:13 -- setup/common.sh@18 -- # local node=
00:04:45.245 20:47:13 -- setup/common.sh@19 -- # local var val
00:04:45.245 20:47:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.245 20:47:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.245 20:47:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.245 20:47:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.245 20:47:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.245 20:47:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.245 20:47:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074516 kB' 'MemAvailable: 9486532 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995428 kB' 'Inactive: 3688552 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143072 kB' 'Active(file): 994380 kB' 'Inactive(file): 3545480 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161680 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257064 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63392 kB' 'KernelStack: 4392 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:45.245 [xtrace condensed: every key from MemTotal through HugePages_Free is skipped with the same continue/IFS/read cycle]
00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.246 20:47:13 -- setup/common.sh@33 -- # echo 0 00:04:45.246 20:47:13 -- setup/common.sh@33 -- # return 0 00:04:45.246 20:47:13 -- setup/hugepages.sh@100 -- # resv=0 00:04:45.246 nr_hugepages=1024 00:04:45.246 20:47:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.246 resv_hugepages=0 00:04:45.246 20:47:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.246 surplus_hugepages=0 00:04:45.246 20:47:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.246 anon_hugepages=0 00:04:45.246 20:47:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.246 20:47:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.246 20:47:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.246 20:47:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.246 20:47:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.246 20:47:13 -- setup/common.sh@18 -- # local node= 00:04:45.246 20:47:13 -- setup/common.sh@19 -- # local var val 00:04:45.246 20:47:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.246 20:47:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.246 20:47:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.246 20:47:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.246 20:47:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.246 20:47:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074516 kB' 'MemAvailable: 9486532 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995428 kB' 'Inactive: 3688632 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143152 kB' 'Active(file): 994380 kB' 'Inactive(file): 3545480 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 161780 kB' 'Mapped: 67904 kB' 'Shmem: 2596 kB' 'KReclaimable: 193672 kB' 'Slab: 257244 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63572 kB' 'KernelStack: 4448 kB' 'PageTables: 3792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 
00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.246 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.246 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.506 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.506 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.507 20:47:13 -- setup/common.sh@33 -- # echo 1024 00:04:45.507 20:47:13 -- setup/common.sh@33 -- # return 0 00:04:45.507 20:47:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.507 20:47:13 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.507 20:47:13 -- setup/hugepages.sh@27 -- # local node 00:04:45.507 20:47:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.507 20:47:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.507 20:47:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.507 20:47:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.507 20:47:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.507 20:47:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.507 20:47:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.507 20:47:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.507 20:47:13 -- setup/common.sh@18 -- # local node=0 00:04:45.507 20:47:13 -- setup/common.sh@19 -- # local var val 00:04:45.507 20:47:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.507 20:47:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.507 20:47:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.507 20:47:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.507 20:47:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.507 20:47:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.507 20:47:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074516 kB' 'MemUsed: 7168448 kB' 'SwapCached: 0 kB' 'Active: 995428 kB' 'Inactive: 3688372 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142892 kB' 'Active(file): 994380 kB' 'Inactive(file): 3545480 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4551520 kB' 'Mapped: 67904 kB' 'AnonPages: 161520 kB' 'Shmem: 2596 kB' 'KernelStack: 4448 kB' 'PageTables: 3792 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193672 kB' 'Slab: 257244 kB' 'SReclaimable: 193672 kB' 'SUnreclaim: 63572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.507 20:47:13 -- setup/common.sh@32 -- # continue 00:04:45.507 20:47:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.508 20:47:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.508 20:47:13 -- setup/common.sh@32 -- # [[ MemFree == 
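What the get_meminfo traces above boil down to is a key lookup over /proc/meminfo, switching to /sys/devices/system/node/nodeN/meminfo when a node argument is given (as in the get_meminfo HugePages_Surp 0 call). A minimal standalone sketch of the same lookup, assuming only standard sed/awk; the helper name is hypothetical and not part of setup/common.sh:

get_meminfo_sketch() {
    # hypothetical helper, not SPDK code
    local key=$1 node=${2-} f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node$node/meminfo
    # per-node meminfo lines carry a "Node <N> " prefix, which common.sh@29 strips;
    # sed does the equivalent here before awk prints the value for the requested key
    sed 's/^Node [0-9]* //' "$f" | awk -v k="$key" '$1 == k":" { print $2; exit }'
}

get_meminfo_sketch HugePages_Total      # would print 1024 at this point in the run
get_meminfo_sketch HugePages_Surp 0     # node0 surplus pages; 0 in the snapshot above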
[... xtrace elided: the node0 meminfo scan repeats the same compare-and-continue cycle for every key (MemTotal through HugePages_Free) until HugePages_Surp matches ...]
00:04:45.508 20:47:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.508 20:47:13 -- setup/common.sh@33 -- # echo 0
00:04:45.508 20:47:13 -- setup/common.sh@33 -- # return 0
00:04:45.508 20:47:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.508 20:47:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.508 20:47:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.508 20:47:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.508 20:47:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.508 node0=1024 expecting 1024
00:04:45.508 20:47:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.508 
00:04:45.508 real 0m0.995s
00:04:45.508 user 0m0.308s
00:04:45.508 sys 0m0.728s
00:04:45.508 20:47:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:45.508 20:47:13 -- common/autotest_common.sh@10 -- # set +x
00:04:45.508 ************************************
00:04:45.508 END TEST even_2G_alloc
00:04:45.508 ************************************
00:04:45.508 20:47:13 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:45.508 20:47:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:45.508 20:47:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:45.508 20:47:13 -- common/autotest_common.sh@10 -- # set +x
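In short, even_2G_alloc verified that the hugepage accounting identity holds and that the single node carries the full allocation: node0=1024 expecting 1024. A simplified, hedged recap of the check traced at setup/hugepages.sh@107-@130, reusing the hypothetical get_meminfo_sketch helper from above (the real script accumulates per-node nodes_test/nodes_sys arrays before comparing; values here are the ones echoed in this run):

nr_hugepages=1024 resv=0 surp=0                  # values echoed by the log above
total=$(get_meminfo_sketch HugePages_Total)      # 1024 in the snapshot
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
node0=$(get_meminfo_sketch HugePages_Total 0)    # single-node box: all pages on node0
[[ $node0 == 1024 ]] && echo "node0=$node0 expecting 1024"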
00:04:45.508 ************************************ 00:04:45.508 START TEST odd_alloc 00:04:45.508 ************************************ 00:04:45.508 20:47:13 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:45.509 20:47:13 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:45.509 20:47:13 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:45.509 20:47:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:45.509 20:47:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.509 20:47:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.509 20:47:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.509 20:47:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:45.509 20:47:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.509 20:47:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.509 20:47:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.509 20:47:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:45.509 20:47:13 -- setup/hugepages.sh@83 -- # : 0 00:04:45.509 20:47:13 -- setup/hugepages.sh@84 -- # : 0 00:04:45.509 20:47:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.509 20:47:13 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:45.509 20:47:13 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:45.509 20:47:13 -- setup/hugepages.sh@160 -- # setup output 00:04:45.509 20:47:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.509 20:47:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:45.767 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.339 20:47:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:46.339 20:47:14 -- setup/hugepages.sh@89 -- # local node 00:04:46.339 20:47:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.339 20:47:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.339 20:47:14 -- setup/hugepages.sh@92 -- # local surp 00:04:46.339 20:47:14 -- setup/hugepages.sh@93 -- # local resv 00:04:46.339 20:47:14 -- setup/hugepages.sh@94 -- # local anon 00:04:46.339 20:47:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.339 20:47:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.339 20:47:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.339 20:47:14 -- setup/common.sh@18 -- # local node= 00:04:46.339 20:47:14 -- setup/common.sh@19 -- # local var val 00:04:46.339 20:47:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.339 20:47:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.339 20:47:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.339 20:47:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.339 20:47:14 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.339 20:47:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.339 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.339 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.339 20:47:14 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5071348 kB' 'MemAvailable: 9483380 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995444 kB' 'Inactive: 3688856 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143384 kB' 'Active(file): 994388 kB' 'Inactive(file): 3545472 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 4 kB' 'AnonPages: 161928 kB' 'Mapped: 67956 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256972 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63284 kB' 'KernelStack: 4404 kB' 'PageTables: 3724 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 506608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: the per-key scan walks MemTotal through HardwareCorrupted, hitting 'continue' on every key that is not AnonHugePages ...]
00:04:46.340 20:47:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.340 20:47:14 -- setup/common.sh@33 -- # echo 0
00:04:46.340 20:47:14 -- setup/common.sh@33 -- # return 0
00:04:46.340 20:47:14 -- setup/hugepages.sh@97 -- # anon=0
00:04:46.340 20:47:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.340 20:47:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.340 20:47:14 -- setup/common.sh@18 -- # local node=
00:04:46.340 20:47:14 -- setup/common.sh@19 -- # local var val
00:04:46.340 20:47:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.340 20:47:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.340 20:47:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.340 20:47:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.340 20:47:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.340 20:47:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.340 20:47:14 -- setup/common.sh@31 -- # IFS=': '
00:04:46.340 20:47:14 -- setup/common.sh@31 -- # read -r var val _
00:04:46.340 20:47:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073936 kB' 'MemAvailable: 9485968 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995436 kB' 'Inactive: 3685608 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 140136 kB' 'Active(file): 994388 kB' 'Inactive(file): 3545472 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 4 kB' 'AnonPages: 158776 kB' 'Mapped: 67436 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256988 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63300 kB' 'KernelStack: 4396 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
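The odd_alloc pass above sized itself from HUGEMEM=2049: get_test_nr_hugepages received 2098176 kB (2049 MiB), and at the 2048 kB Hugepagesize shown in these snapshots that is 1024.5 pages, which the harness turns into the deliberately odd count nr_hugepages=1025. The worked arithmetic, assuming a round-up to whole pages (the intermediate steps are not traced in this excerpt):

size_kb=2098176                                # get_test_nr_hugepages argument (2049 * 1024)
hpg_kb=2048                                    # Hugepagesize from the meminfo snapshots
echo $(( (size_kb + hpg_kb - 1) / hpg_kb ))    # -> 1025, the odd page count requested
echo $(( 1025 * hpg_kb ))                      # -> 2099200 kB, matching 'Hugetlb: 2099200 kB'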
[... xtrace elided: the per-key scan walks MemTotal through HugePages_Rsvd, hitting 'continue' on every key that is not HugePages_Surp ...]
00:04:46.342 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.342 20:47:14 -- setup/common.sh@33 -- # echo 0
00:04:46.342 20:47:14 -- setup/common.sh@33 -- # return 0
00:04:46.342 20:47:14 -- setup/hugepages.sh@99 -- # surp=0
00:04:46.342 20:47:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.342 20:47:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.342 20:47:14 -- setup/common.sh@18 -- # local node=
00:04:46.342 20:47:14 -- setup/common.sh@19 -- # local var val
00:04:46.342 20:47:14 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.342 20:47:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.342 20:47:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.342 20:47:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.342 20:47:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.342 20:47:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.342 20:47:14 -- setup/common.sh@31 -- # IFS=': '
00:04:46.342 20:47:14 -- setup/common.sh@31 -- # read -r var val _
00:04:46.342 20:47:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073936 kB' 'MemAvailable: 9485968 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995436 kB' 'Inactive: 3685392 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139920 kB' 'Active(file): 994388 kB' 'Inactive(file): 3545472 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 4 kB' 'AnonPages: 158576 kB' 'Mapped: 67436 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256988 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63300 kB' 'KernelStack: 4396 kB' 'PageTables: 3828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[... xtrace elided: the HugePages_Rsvd scan is still walking this snapshot, MemTotal through SReclaimable so far, when the excerpt ends ...]
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # continue 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.343 20:47:14 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.343 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.343 20:47:14 -- setup/common.sh@33 -- # echo 0 00:04:46.343 20:47:14 -- setup/common.sh@33 -- # return 0 00:04:46.343 20:47:14 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.343 nr_hugepages=1025 00:04:46.343 20:47:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:46.343 resv_hugepages=0 00:04:46.343 20:47:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.343 surplus_hugepages=0 00:04:46.343 20:47:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.343 anon_hugepages=0 00:04:46.343 20:47:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.343 20:47:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:46.343 20:47:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:46.343 20:47:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.343 20:47:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.343 20:47:14 -- setup/common.sh@18 -- # local node= 00:04:46.343 20:47:14 -- setup/common.sh@19 -- # local var val 00:04:46.343 20:47:14 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.343 20:47:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.343 20:47:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.343 20:47:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.343 
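Every block in this trace is the same routine, so before the third lookup completes it is worth pinning it down. The following is a minimal standalone sketch of get_meminfo, reconstructed from the xtraced lines above (mem_f selection, mapfile, the extglob Node-prefix strip, and the IFS=': ' read loop); it is not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Look up one key from /proc/meminfo, or from a single NUMA node's view
    # when a node number is given (sketch reconstructed from the trace).
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, the per-node sysfs file exists and wins.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp     # prints 0 at this point in the run
    get_meminfo HugePages_Surp 0   # same key, read from node0's meminfo

The per-key scan is linear, which is exactly why the trace shows one compare/continue pair for every field ahead of the requested one.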
00:04:46.344 20:47:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073872 kB' 'MemAvailable: 9485904 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995436 kB' 'Inactive: 3685208 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139736 kB' 'Active(file): 994388 kB' 'Inactive(file): 3545472 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 158624 kB' 'Mapped: 67380 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256988 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63300 kB' 'KernelStack: 4400 kB' 'PageTables: 3648 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: per-key scan of the snapshot above until the target key matches]
00:04:46.345 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.345 20:47:14 -- setup/common.sh@33 -- # echo 1025
00:04:46.345 20:47:14 -- setup/common.sh@33 -- # return 0
00:04:46.345 20:47:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:46.345 20:47:14 -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.345 20:47:14 -- setup/hugepages.sh@27 -- # local node
00:04:46.345 20:47:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.345 20:47:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:46.345 20:47:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.345 20:47:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.345 20:47:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
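The three lookups feed one accounting identity, checked twice above at hugepages.sh@107 and @110. With the get_meminfo sketch from earlier on hand, the check reduces to this arithmetic (values as reported in this run's snapshots):

    # Hugepage accounting check performed by the trace above: the total the
    # kernel reports must equal what the test requested plus any surplus
    # and reserved pages.
    nr_hugepages=1025                      # what odd_alloc configured
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1025 in this run
    (( total == nr_hugepages + surp + resv )) && echo "accounting OK"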
00:04:46.345 20:47:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.345 20:47:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.345 20:47:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.345 20:47:14 -- setup/common.sh@18 -- # local node=0
00:04:46.345 20:47:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.345 20:47:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.345 20:47:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.345 20:47:14 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.345 20:47:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.345 20:47:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073872 kB' 'MemUsed: 7169092 kB' 'SwapCached: 0 kB' 'Active: 995436 kB' 'Inactive: 3685208 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139736 kB' 'Active(file): 994388 kB' 'Inactive(file): 3545472 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4551520 kB' 'Mapped: 67380 kB' 'AnonPages: 158364 kB' 'Shmem: 2596 kB' 'KernelStack: 4468 kB' 'PageTables: 3648 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193688 kB' 'Slab: 256988 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node0 snapshot above in progress]
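While that scan walks node0's counters, note the shape of the file it is reading: the per-node meminfo carries the same keys as /proc/meminfo but with a "Node 0 " prefix on every line, which is why get_meminfo strips "Node +([0-9]) " first. A quick way to eyeball just the hugepage fields:

    # Per-node hugepage counters; output lines look like
    #   Node 0 HugePages_Total:  1025
    #   Node 0 HugePages_Free:   1025
    #   Node 0 HugePages_Surp:      0
    # (values as of this point in the run, on this single-node VM).
    grep HugePages /sys/devices/system/node/node0/meminfo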
00:04:46.346 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.346 20:47:14 -- setup/common.sh@32 -- # continue
00:04:46.346 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.346 20:47:14 -- setup/common.sh@32 -- # continue
00:04:46.346 20:47:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.346 20:47:14 -- setup/common.sh@33 -- # echo 0
00:04:46.346 20:47:14 -- setup/common.sh@33 -- # return 0
00:04:46.346 20:47:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.346 20:47:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.346 20:47:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.346 20:47:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.346 node0=1025 expecting 1025
00:04:46.346 20:47:14 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:46.346 20:47:14 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:46.346
00:04:46.346 real 0m0.981s
00:04:46.346 user 0m0.302s
00:04:46.346 sys 0m0.718s
00:04:46.346 20:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:46.346 20:47:14 -- common/autotest_common.sh@10 -- # set +x
00:04:46.346 ************************************
00:04:46.346 END TEST odd_alloc
00:04:46.346 ************************************
00:04:46.606 20:47:14 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:46.606 20:47:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:46.606 20:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:46.606 20:47:14 -- common/autotest_common.sh@10 -- # set +x
00:04:46.606 ************************************
00:04:46.606 START TEST custom_alloc
00:04:46.606 ************************************
00:04:46.606 20:47:14 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:46.606 20:47:14 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:46.606 20:47:14 -- setup/hugepages.sh@169 -- # local node
00:04:46.606 20:47:14 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:46.606 20:47:14 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:46.606 20:47:14 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:46.606 20:47:14 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:46.606 20:47:14 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:46.606 20:47:14 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:46.606 20:47:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.606 20:47:14 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.606 20:47:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.606 20:47:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:46.606 20:47:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:46.606 20:47:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.606 20:47:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
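The jump from "get_test_nr_hugepages 1048576" to "nr_hugepages=512" is a size-to-page-count conversion. The trace shows only the comparison and the result, so the division below is an inference, assuming default_hugepages is the Hugepagesize value from /proc/meminfo (2048 kB in every snapshot of this run):

    # Size-to-pages arithmetic behind get_test_nr_hugepages 1048576
    # (sketch; the trace confirms the inputs and the 512-page result).
    size=1048576                                                      # requested kB, i.e. 1 GiB
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"                                 # nr_hugepages=512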
00:04:46.606 20:47:14 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:46.606 20:47:14 -- setup/hugepages.sh@83 -- # : 0
00:04:46.606 20:47:14 -- setup/hugepages.sh@84 -- # : 0
00:04:46.606 20:47:14 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:46.606 20:47:14 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:46.606 20:47:14 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:46.606 20:47:14 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:46.606 20:47:14 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[xtrace condensed: the per-node distribution repeats, this time honoring nodes_hp: (( 1 > 0 )), for _no_nodes in "${!nodes_hp[@]}", nodes_test[0]=512, return 0]
00:04:46.606 20:47:14 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:46.606 20:47:14 -- setup/hugepages.sh@187 -- # setup output
00:04:46.606 20:47:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.606 20:47:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:46.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:46.865 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
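The HUGENODE assignment above is how hugepages.sh hands its per-node plan to SPDK's setup script; condensed to a standalone command, the invocation the trace just made is simply:

    # Reserve 512 hugepages on NUMA node 0, then (re)bind devices; this is
    # the exact variable and script path from the trace above.
    HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh

The effect is visible in the meminfo snapshots that follow: HugePages_Total drops from the 1025 that odd_alloc configured to 512, with Hugetlb at 1048576 kB.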
00:04:47.127 20:47:15 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:47.127 20:47:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:47.127 20:47:15 -- setup/hugepages.sh@89 -- # local node
00:04:47.127 20:47:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.127 20:47:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.127 20:47:15 -- setup/hugepages.sh@92 -- # local surp
00:04:47.127 20:47:15 -- setup/hugepages.sh@93 -- # local resv
00:04:47.127 20:47:15 -- setup/hugepages.sh@94 -- # local anon
00:04:47.127 20:47:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.127 20:47:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: get_meminfo preamble (locals, mem_f=/proc/meminfo, mapfile, Node-prefix strip)]
00:04:47.127 20:47:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6123624 kB' 'MemAvailable: 10535656 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995456 kB' 'Inactive: 3685632 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140172 kB' 'Active(file): 994400 kB' 'Inactive(file): 3545460 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 159096 kB' 'Mapped: 67404 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256992 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63304 kB' 'KernelStack: 4464 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: per-key scan of the snapshot above until the target key matches]
00:04:47.128 20:47:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.128 20:47:15 -- setup/common.sh@33 -- # echo 0
00:04:47.128 20:47:15 -- setup/common.sh@33 -- # return 0
00:04:47.128 20:47:15 -- setup/hugepages.sh@97 -- # anon=0
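The guard at hugepages.sh@96 explains the odd-looking "always [madvise] never" string: the bracketed word is the active transparent-hugepage mode, and AnonHugePages is only meaningful to measure when THP is not globally off. A sketch of that check, assuming the string comes from the standard sysfs file (the trace shows only the expanded string, not the path):

    # Only sample AnonHugePages when transparent hugepages are not set to
    # "never"; here the mode is [madvise], so the lookup proceeds and
    # reports 0 kB (get_meminfo as sketched earlier).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)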
00:04:47.128 20:47:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.128 20:47:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.128 20:47:15 -- setup/common.sh@18 -- # local node=
00:04:47.128 20:47:15 -- setup/common.sh@19 -- # local var val
00:04:47.128 20:47:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.128 20:47:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.128 20:47:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.128 20:47:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.128 20:47:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.128 20:47:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.128 20:47:15 -- setup/common.sh@31 -- # IFS=': '
00:04:47.129 20:47:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6123624 kB' 'MemAvailable: 10535656 kB' 'Buffers: 35528 kB' 'Cached: 4515992 kB' 'SwapCached: 0 kB' 'Active: 995456 kB' 'Inactive: 3685756 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140296 kB' 'Active(file): 994400 kB' 'Inactive(file): 3545460 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 159172 kB' 'Mapped: 67404 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256992 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63304 kB' 'KernelStack: 4432 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.129 20:47:15 -- setup/common.sh@31 -- # read -r var val _
00:04:47.129 20:47:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.129 20:47:15 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / field-test / continue cycle repeats for each field, MemFree through HugePages_Rsvd ...]
00:04:47.129 20:47:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.130 20:47:15 -- setup/common.sh@33 -- # echo 0
00:04:47.130 20:47:15 -- setup/common.sh@33 -- # return 0
00:04:47.130 20:47:15 -- setup/hugepages.sh@99 -- # surp=0
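A note on the notation: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not in the script source. Bash xtrace escapes the expansion of $get so that it prints as a literal, non-glob pattern. A hypothetical one-liner, not from the test suite, reproduces the effect:

    bash -xc 'get=HugePages_Surp; var=MemTotal; [[ $var == $get ]]'
    # the trace output includes: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    # exit status is 1 here, since the two strings differ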
00:04:47.130 20:47:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.130 20:47:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.130 20:47:15 -- setup/common.sh@18 -- # local node=
00:04:47.130 20:47:15 -- setup/common.sh@19 -- # local var val
00:04:47.130 20:47:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.130 20:47:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.130 20:47:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.130 20:47:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.130 20:47:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.130 20:47:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.130 20:47:15 -- setup/common.sh@31 -- # IFS=': '
00:04:47.130 20:47:15 -- setup/common.sh@31 -- # read -r var val _
00:04:47.130 20:47:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6123832 kB' 'MemAvailable: 10535864 kB' 'Buffers: 35528 kB' 'Cached: 4515996 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3685148 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 139688 kB' 'Active(file): 994400 kB' 'Inactive(file): 3545460 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 158292 kB' 'Mapped: 67364 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 256928 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63240 kB' 'KernelStack: 4280 kB' 'PageTables: 3180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.130 20:47:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.130 20:47:15 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / field-test / continue cycle repeats for each field, MemFree through HugePages_Free ...]
00:04:47.131 20:47:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.131 20:47:15 -- setup/common.sh@33 -- # echo 0
00:04:47.131 20:47:15 -- setup/common.sh@33 -- # return 0
00:04:47.131 20:47:15 -- setup/hugepages.sh@100 -- # resv=0
00:04:47.131 nr_hugepages=512
00:04:47.131 20:47:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:47.131 resv_hugepages=0
00:04:47.131 20:47:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.131 surplus_hugepages=0
00:04:47.131 20:47:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.131 anon_hugepages=0
00:04:47.131 20:47:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.131 20:47:15 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:47.131 20:47:15 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
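Stripped of the per-field tracing, the accounting that verify_nr_hugepages has performed up to this point reduces to a few lines. A condensed sketch reconstructed from the trace (this run expects a 512-page pool and observed anon = surp = resv = 0):

    nr_hugepages=512   # set earlier by get_test_nr_hugepages
    anon=$(get_meminfo AnonHugePages)    # traced result: 0
    surp=$(get_meminfo HugePages_Surp)   # traced result: 0
    resv=$(get_meminfo HugePages_Rsvd)   # traced result: 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool is consistent when the kernel-reported total covers the
    # request plus surplus and reserved pages: here 512 == 512 + 0 + 0.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))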
00:04:47.131 20:47:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.131 20:47:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.131 20:47:15 -- setup/common.sh@18 -- # local node=
00:04:47.131 20:47:15 -- setup/common.sh@19 -- # local var val
00:04:47.131 20:47:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.131 20:47:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.131 20:47:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.131 20:47:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.131 20:47:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.131 20:47:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.131 20:47:15 -- setup/common.sh@31 -- # IFS=': '
00:04:47.131 20:47:15 -- setup/common.sh@31 -- # read -r var val _
00:04:47.131 20:47:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6124064 kB' 'MemAvailable: 10536096 kB' 'Buffers: 35528 kB' 'Cached: 4515996 kB' 'SwapCached: 0 kB' 'Active: 995456 kB' 'Inactive: 3684944 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 139484 kB' 'Active(file): 994400 kB' 'Inactive(file): 3545460 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'AnonPages: 158408 kB' 'Mapped: 67364 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257024 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63336 kB' 'KernelStack: 4332 kB' 'PageTables: 3232 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:47.131 20:47:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.131 20:47:15 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / field-test / continue cycle repeats for each field, MemFree through FilePmdMapped ...]
00:04:47.132 20:47:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.132 20:47:15 -- setup/common.sh@33 -- # echo 512
00:04:47.132 20:47:15 -- setup/common.sh@33 -- # return 0
00:04:47.132 20:47:15 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
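The check is then repeated per NUMA node, which is what the get_nodes call below prepares. A sketch reconstructed from the trace that follows; the sysfs file read to obtain each node's count is an assumption, since xtrace shows only the expanded value (512):

    shopt -s extglob nullglob

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # node10 -> index 10; this single-node VM has node0 only.
            # Assumed source of the 512 seen in the expanded assignment:
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # traced as no_nodes=1
        (( no_nodes > 0 ))
    }

The per-node get_meminfo HugePages_Surp 0 that follows reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; that is exactly what the mem=("${mem[@]#Node +([0-9]) }") strip in get_meminfo is for.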
00:04:47.132 20:47:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.132 20:47:15 -- setup/hugepages.sh@27 -- # local node
00:04:47.132 20:47:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.132 20:47:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:47.133 20:47:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:47.133 20:47:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:47.133 20:47:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.133 20:47:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.133 20:47:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.133 20:47:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.133 20:47:15 -- setup/common.sh@18 -- # local node=0
00:04:47.133 20:47:15 -- setup/common.sh@19 -- # local var val
00:04:47.133 20:47:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.133 20:47:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.133 20:47:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.133 20:47:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.133 20:47:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.133 20:47:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.133 20:47:15 -- setup/common.sh@31 -- # IFS=': '
00:04:47.133 20:47:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 6124064 kB' 'MemUsed: 6118900 kB' 'SwapCached: 0 kB' 'Active: 995456 kB' 'Inactive: 3684932 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 139472 kB' 'Active(file): 994400 kB' 'Inactive(file): 3545460 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 660 kB' 'Writeback: 0 kB' 'FilePages: 4551524 kB' 'Mapped: 67364 kB' 'AnonPages: 158136 kB' 'Shmem: 2596 kB' 'KernelStack: 4316 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193688 kB' 'Slab: 257024 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:47.133 20:47:15 -- setup/common.sh@31 -- # read -r var val _
00:04:47.133 20:47:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.133 20:47:15 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / field-test / continue cycle repeats for each node0 meminfo field, MemFree through HugePages_Free ...]
00:04:47.134 20:47:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.134 20:47:15 -- setup/common.sh@33 -- # echo 0
00:04:47.134 20:47:15 -- setup/common.sh@33 -- # return 0
00:04:47.134 20:47:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.134 20:47:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.134 20:47:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.134 20:47:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.134 node0=512 expecting 512
00:04:47.134 20:47:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:47.134 20:47:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:47.134 
00:04:47.134 real 0m0.726s
00:04:47.134 user 0m0.301s
00:04:47.134 sys 0m0.464s
00:04:47.134 20:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.134 20:47:15 -- common/autotest_common.sh@10 -- # set +x
00:04:47.134 ************************************
00:04:47.134 END TEST custom_alloc
00:04:47.134 ************************************
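custom_alloc finishes cleanly in 0.726 s. Before the no_shrink_alloc trace below, its pool sizing is worth spelling out: get_test_nr_hugepages 2097152 0 requests a 2 GiB pool on node 0, and with Hugepagesize reported as 2048 kB in the snapshots above that works out to the nr_hugepages=1024 seen in the trace. A sketch of the arithmetic (units assumed to be kB, consistent with the traced values):

    default_hugepages=2048   # kB, Hugepagesize from /proc/meminfo
    size=2097152             # kB requested: 2097152 kB = 2 GiB
    (( size >= default_hugepages ))   # sanity check traced at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"     # 2097152 / 2048 = 1024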
00:04:47.393 20:47:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:47.393 20:47:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:47.393 20:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:47.393 20:47:15 -- common/autotest_common.sh@10 -- # set +x
00:04:47.393 ************************************
00:04:47.393 START TEST no_shrink_alloc
00:04:47.393 ************************************
00:04:47.393 20:47:15 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:47.393 20:47:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:47.393 20:47:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:47.393 20:47:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:47.393 20:47:15 -- setup/hugepages.sh@51 -- # shift
00:04:47.393 20:47:15 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:47.393 20:47:15 -- setup/hugepages.sh@52 -- # local node_ids
00:04:47.393 20:47:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.393 20:47:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:47.393 20:47:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:47.393 20:47:15 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:47.393 20:47:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.393 20:47:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:47.393 20:47:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:47.393 20:47:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.393 20:47:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.393 20:47:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:47.393 20:47:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:47.393 20:47:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:47.393 20:47:15 -- setup/hugepages.sh@73 -- # return 0
00:04:47.393 20:47:15 -- setup/hugepages.sh@198 -- # setup output
00:04:47.393 20:47:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.393 20:47:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:47.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:47.652 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.223 20:47:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:48.223 20:47:16 -- setup/hugepages.sh@89 -- # local node
00:04:48.223 20:47:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:48.223 20:47:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:48.223 20:47:16 -- setup/hugepages.sh@92 -- # local surp
00:04:48.223 20:47:16 -- setup/hugepages.sh@93 -- # local resv
00:04:48.223 20:47:16 -- setup/hugepages.sh@94 -- # local anon
00:04:48.223 20:47:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:48.223 20:47:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:48.223 20:47:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:48.223 20:47:16 -- setup/common.sh@18 -- # local node=
00:04:48.223 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.223 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.223 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.223 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.223 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.223 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.223 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _
00:04:48.223 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074332 kB' 'MemAvailable: 9486376 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3685356 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139892 kB' 'Active(file): 994408 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 158496 kB' 'Mapped: 67184 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257188 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63500 kB' 'KernelStack: 4400 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / field-test / continue cycle continues over the remaining /proc/meminfo fields, MemFree onward ...]
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 
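What the long runs of "continue" above are doing: get_meminfo answers a single key per call by scanning every row of a meminfo snapshot. A minimal reconstruction from this trace follows (function and variable names - get, node, mem_f, mem - are taken from the setup/common.sh xtrace markers; the shipped script may differ in detail):

shopt -s extglob                       # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}           # key to look up, optional NUMA node
    local var val
    local mem_f=/proc/meminfo mem
    # per-node queries read that node's own meminfo file when it exists
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the scan visible in the trace
        echo "$val"                        # e.g. 0 for AnonHugePages here
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

Called as get_meminfo AnonHugePages it walks the entire snapshot until the key matches, which is why every query in this trace emits one IFS/read/continue triple per meminfo field.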
00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.223 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.223 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.224 20:47:16 -- setup/common.sh@33 -- # echo 0 00:04:48.224 20:47:16 -- setup/common.sh@33 -- # return 0 00:04:48.224 20:47:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:48.224 20:47:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.224 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.224 20:47:16 -- setup/common.sh@18 -- # 
local node= 00:04:48.224 20:47:16 -- setup/common.sh@19 -- # local var val 00:04:48.224 20:47:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.224 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.224 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.224 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.224 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.224 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074568 kB' 'MemAvailable: 9486612 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684896 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139432 kB' 'Active(file): 994408 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 158124 kB' 'Mapped: 67168 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257244 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63556 kB' 'KernelStack: 4280 kB' 'PageTables: 3236 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 
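The identical scan repeats here because verify_nr_hugepages collects each counter with a separate get_meminfo call; nothing is cached between queries. A condensed sketch of the flow driving the hugepages.sh@96-@105 markers around this point (the nr_hugepages value and the transparent-hugepage check are as shown in the trace; other details are simplified):

verify_nr_hugepages() {
    local anon=0 surp resv nr_hugepages=1024
    # AnonHugePages only counts when transparent hugepages are not [never]
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # -> anon=0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # -> surp=0
    resv=$(get_meminfo HugePages_Rsvd)      # -> resv=0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
}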
00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.224 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.224 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- 
setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.225 20:47:16 -- setup/common.sh@33 -- # echo 0 00:04:48.225 20:47:16 -- setup/common.sh@33 -- # return 0 00:04:48.225 20:47:16 -- setup/hugepages.sh@99 -- # surp=0 00:04:48.225 20:47:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.225 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.225 20:47:16 -- setup/common.sh@18 -- # local node= 00:04:48.225 20:47:16 -- setup/common.sh@19 -- # local var val 00:04:48.225 20:47:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.225 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.225 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.225 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.225 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.225 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074552 kB' 'MemAvailable: 9486596 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684792 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139328 kB' 'Active(file): 994408 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 158024 kB' 'Mapped: 67184 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257244 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63556 kB' 'KernelStack: 4320 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
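The arithmetic checks coming up at hugepages.sh@107/@110 state the invariant this test actually asserts: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages. Reading the expanded values out of this trace, both checks reduce to the same true statement:

# invariant behind the (( ... )) checks, filled in with this run's numbers
(( 1024 == 1024 + 0 + 0 ))   # HugePages_Total == nr_hugepages + surp + resv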
00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.225 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.225 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 
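Once the global counters check out, the remainder of this trace turns to per-node accounting: get_meminfo HugePages_Surp 0 re-runs the same scan against /sys/devices/system/node/node0/meminfo, and the closing node0=1024 expecting 1024 line is the verdict. A sketch of that reconciliation, pieced together from the hugepages.sh@115-@128 markers below (the command substitution inside the += is an assumption - the trace only shows its expanded value of 0):

# per-node reconciliation (sketch; array names taken from the trace)
nodes_test=([0]=1024)   # expected counts, populated earlier at hugepages.sh@71
nodes_sys=([0]=1024)    # observed counts, populated earlier at hugepages.sh@30
resv=0
declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
done
for node in "${!nodes_test[@]}"; do                                  # @126
    sorted_t[nodes_test[node]]=1   # de-duplicate expected counts
    sorted_s[nodes_sys[node]]=1    # de-duplicate observed counts
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done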
00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 
-- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.226 20:47:16 -- setup/common.sh@33 -- # echo 0 00:04:48.226 20:47:16 -- setup/common.sh@33 -- # return 0 00:04:48.226 nr_hugepages=1024 00:04:48.226 resv_hugepages=0 00:04:48.226 surplus_hugepages=0 00:04:48.226 anon_hugepages=0 00:04:48.226 20:47:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:48.226 20:47:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.226 20:47:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.226 20:47:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.226 20:47:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.226 20:47:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.226 20:47:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.226 20:47:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.226 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.226 20:47:16 -- setup/common.sh@18 -- # local node= 00:04:48.226 20:47:16 -- setup/common.sh@19 -- # local var val 00:04:48.226 20:47:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.226 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.226 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.226 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.226 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.226 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.226 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.226 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5074572 kB' 'MemAvailable: 9486616 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684792 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139328 kB' 'Active(file): 994408 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 157764 kB' 'Mapped: 67184 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257244 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63556 kB' 'KernelStack: 4388 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 
-- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.227 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.227 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.228 20:47:16 -- setup/common.sh@33 -- # echo 1024 00:04:48.228 20:47:16 -- setup/common.sh@33 -- # return 0 00:04:48.228 20:47:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.228 20:47:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.228 20:47:16 -- setup/hugepages.sh@27 -- # local node 00:04:48.228 20:47:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.228 20:47:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.228 20:47:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.228 20:47:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.228 20:47:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.228 20:47:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.228 20:47:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.228 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.228 20:47:16 -- setup/common.sh@18 -- # local node=0 00:04:48.228 20:47:16 -- setup/common.sh@19 -- # local var val 00:04:48.228 20:47:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:48.228 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.228 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.228 20:47:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.228 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.228 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.228 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.228 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5075076 kB' 'MemUsed: 7167888 kB' 'SwapCached: 0 kB' 'Active: 995456 kB' 'Inactive: 3684768 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139300 kB' 'Active(file): 994408 kB' 'Inactive(file): 3545468 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 808 kB' 
'Writeback: 0 kB' 'FilePages: 4551536 kB' 'Mapped: 67164 kB' 'AnonPages: 158004 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3328 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193688 kB' 'Slab: 257228 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:48.228 [xtrace condensed: every node0 meminfo key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue, each as an IFS=': ' / read -r var val _ / continue triplet]
00:04:48.229 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.229 20:47:16 -- setup/common.sh@33 -- # echo 0
00:04:48.229 20:47:16 -- setup/common.sh@33 -- # return 0
00:04:48.229 node0=1024 expecting 1024
00:04:48.229 20:47:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:48.229 20:47:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:48.229 20:47:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:48.229 20:47:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
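The long runs of IFS=': ' / read -r var val _ / continue in this trace are bash xtrace of a small field-matching loop inside setup/common.sh's get_meminfo helper. A minimal sketch of that pattern, reconstructed from the trace for illustration (not the verbatim SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob

    # get_meminfo KEY [NODE] -- print KEY's value from /proc/meminfo, or from
    # the per-node sysfs copy when a NUMA node is given.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local mem var val _

        # A node argument switches to the per-node copy (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # sysfs prefixes each line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            # Every non-matching key is one IFS/read/continue triplet in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 0, exactly the invocation whose trace ends above, it prints 0 on this host.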
00:04:48.229 20:47:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:48.229 20:47:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:48.229 20:47:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:48.229 20:47:16 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:48.229 20:47:16 -- setup/hugepages.sh@202 -- # setup output
00:04:48.229 20:47:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:48.229 20:47:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:48.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:48.488 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:48.488 INFO: Requested 512 hugepages but 1024 already allocated on node0
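The INFO line above is scripts/setup.sh declining to resize an already-sufficient hugepage pool. A hypothetical sketch of that decision, assuming the standard per-node sysfs knob; only NRHUGE, CLEAR_HUGE, and the message text come from the log, the rest is illustrative:

    #!/usr/bin/env bash
    NRHUGE=512 CLEAR_HUGE=no   # from the trace at hugepages.sh@202
    sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB

    allocated=$(< "$sysfs/nr_hugepages")
    if [[ $CLEAR_HUGE == yes ]] || (( allocated < NRHUGE )); then
        # Resize the pool only when asked to clear it or when it is too small.
        echo "$NRHUGE" > "$sysfs/nr_hugepages"
    else
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    fi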
00:04:48.488 20:47:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:48.488 20:47:16 -- setup/hugepages.sh@89 -- # local node
00:04:48.488 20:47:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:48.488 20:47:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:48.488 20:47:16 -- setup/hugepages.sh@92 -- # local surp
00:04:48.488 20:47:16 -- setup/hugepages.sh@93 -- # local resv
00:04:48.488 20:47:16 -- setup/hugepages.sh@94 -- # local anon
00:04:48.488 20:47:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:48.488 20:47:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:48.488 20:47:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:48.488 20:47:16 -- setup/common.sh@18 -- # local node=
00:04:48.488 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.488 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.488 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.488 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.488 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.488 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.488 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.489 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.489 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073108 kB' 'MemAvailable: 9485156 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995476 kB' 'Inactive: 3685956 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 140492 kB' 'Active(file): 994412 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 159172 kB' 'Mapped: 67220 kB' 'Shmem: 2596 kB' 'KReclaimable: 193688 kB' 'Slab: 257396 kB' 'SReclaimable: 193688 kB' 'SUnreclaim: 63708 kB' 'KernelStack: 4456 kB' 'PageTables: 3836 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.489 20:47:16 -- setup/common.sh@31 -- # read -r var val _
00:04:48.489 [xtrace condensed: every key from MemTotal through HardwareCorrupted fails [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hits continue]
00:04:48.751 20:47:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:48.751 20:47:16 -- setup/common.sh@33 -- # echo 0
00:04:48.751 20:47:16 -- setup/common.sh@33 -- # return 0
00:04:48.751 20:47:16 -- setup/hugepages.sh@97 -- # anon=0
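The gate traced at hugepages.sh@96 above decides whether AnonHugePages is worth sampling at all: it is skipped only when transparent hugepages are hard-disabled. A sketch assuming the standard THP sysfs knob, with only the observed value taken from this log (get_meminfo is the helper sketched earlier):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" on this host
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP enabled or madvise-only, so the counter is meaningful (0 kB in this run).
        anon=$(get_meminfo AnonHugePages)
    fi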
00:04:48.751 20:47:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:48.751 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.751 20:47:16 -- setup/common.sh@18 -- # local node=
00:04:48.751 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.751 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.751 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.751 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.751 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.751 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.751 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.751 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.751 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073572 kB' 'MemAvailable: 9485636 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684924 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139460 kB' 'Active(file): 994412 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 158296 kB' 'Mapped: 67164 kB' 'Shmem: 2596 kB' 'KReclaimable: 193704 kB' 'Slab: 257276 kB' 'SReclaimable: 193704 kB' 'SUnreclaim: 63572 kB' 'KernelStack: 4316 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.751 20:47:16 -- setup/common.sh@31 -- # read -r var val _
00:04:48.752 [xtrace condensed: every key from MemTotal through HugePages_Rsvd fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits continue]
00:04:48.752 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.752 20:47:16 -- setup/common.sh@33 -- # echo 0
00:04:48.752 20:47:16 -- setup/common.sh@33 -- # return 0
00:04:48.752 20:47:16 -- setup/hugepages.sh@99 -- # surp=0
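The backslash-heavy comparisons throughout this trace are not corruption: in [[ $var == "$get" ]] the quoted right-hand side must match literally, and bash xtrace prints it with every character backslash-escaped so it still reads as a literal pattern. A standalone demo of the same effect:

    set -x
    get=HugePages_Surp
    [[ HugePages_Surp == "$get" ]]  # traces as: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x

The same escaping produced the *\[\n\e\v\e\r\]* form of the THP glob at hugepages.sh@96 further up.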
00:04:48.752 20:47:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:48.752 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:48.752 20:47:16 -- setup/common.sh@18 -- # local node=
00:04:48.752 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.752 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.752 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.752 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.752 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.752 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.752 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.752 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.753 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073572 kB' 'MemAvailable: 9485636 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684992 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139528 kB' 'Active(file): 994412 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 158352 kB' 'Mapped: 67164 kB' 'Shmem: 2596 kB' 'KReclaimable: 193704 kB' 'Slab: 257276 kB' 'SReclaimable: 193704 kB' 'SUnreclaim: 63572 kB' 'KernelStack: 4272 kB' 'PageTables: 3292 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 498128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.753 20:47:16 -- setup/common.sh@31 -- # read -r var val _
00:04:48.753 [xtrace condensed: every key from MemTotal through HugePages_Free fails [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and hits continue]
00:04:48.754 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:48.754 20:47:16 -- setup/common.sh@33 -- # echo 0
00:04:48.754 20:47:16 -- setup/common.sh@33 -- # return 0
00:04:48.754 20:47:16 -- setup/hugepages.sh@100 -- # resv=0
00:04:48.754 20:47:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:48.754 nr_hugepages=1024
00:04:48.754 20:47:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:48.754 resv_hugepages=0
00:04:48.754 20:47:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:48.754 surplus_hugepages=0
00:04:48.754 20:47:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:48.754 anon_hugepages=0
00:04:48.754 20:47:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:48.754 20:47:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
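The checks traced at hugepages.sh@107-@110 reduce to a couple of arithmetic assertions; spelled out with this run's values (a reconstruction, with variable names taken from the trace and get_meminfo being the helper sketched earlier):

    nr_hugepages=1024 surp=0 resv=0 anon=0
    (( 1024 == nr_hugepages + surp + resv ))   # @107: pool == requested + surplus + reserved
    (( 1024 == nr_hugepages ))                 # @109: and exactly the requested size
    total=$(get_meminfo HugePages_Total)       # @110 re-reads the live kernel value (1024 here)...
    (( total == nr_hugepages + surp + resv ))  # ...and checks the same identity against it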
00:04:48.754 20:47:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:48.754 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:48.754 20:47:16 -- setup/common.sh@18 -- # local node=
00:04:48.754 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.754 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.754 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.754 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.754 20:47:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.754 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.754 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.754 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.754 20:47:16 -- setup/common.sh@31 -- # read -r var val _
00:04:48.754 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073824 kB' 'MemAvailable: 9485888 kB' 'Buffers: 35536 kB' 'Cached: 4516000 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3684972 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139508 kB' 'Active(file): 994412 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'AnonPages: 158104 kB' 'Mapped: 67164 kB' 'Shmem: 2596 kB' 'KReclaimable: 193704 kB' 'Slab: 257276 kB' 'SReclaimable: 193704 kB' 'SUnreclaim: 63572 kB' 'KernelStack: 4304 kB' 'PageTables: 3368 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 497908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB'
00:04:48.754 [xtrace condensed: every key from MemTotal through FilePmdMapped fails [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits continue]
00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:48.755 20:47:16 -- setup/common.sh@33 -- # echo 1024
00:04:48.755 20:47:16 -- setup/common.sh@33 -- # return 0
00:04:48.755 20:47:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:48.755 20:47:16 -- setup/hugepages.sh@112 -- # get_nodes
00:04:48.755 20:47:16 -- setup/hugepages.sh@27 -- # local node
00:04:48.755 20:47:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:48.755 20:47:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:48.755 20:47:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:48.755 20:47:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:48.755 20:47:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.755 20:47:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.755 20:47:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:48.755 20:47:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.755 20:47:16 -- setup/common.sh@18 -- # local node=0
00:04:48.755 20:47:16 -- setup/common.sh@19 -- # local var val
00:04:48.755 20:47:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:48.755 20:47:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.755 20:47:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:48.755 20:47:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:48.755 20:47:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.755 20:47:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.755 20:47:16 -- setup/common.sh@31 -- # IFS=': '
00:04:48.755 20:47:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 5073824 kB' 'MemUsed: 7169140 kB' 'SwapCached: 0 kB' 'Active: 995460 kB' 'Inactive: 3685148 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139684 kB' 'Active(file): 994412 kB' 'Inactive(file): 3545464 kB' 'Unevictable: 29164 kB' 'Mlocked: 27628 kB' 'Dirty: 808 kB' 'Writeback: 0 kB' 'FilePages: 4551536 kB' 'Mapped: 67164 kB' 'AnonPages: 158328 kB' 'Shmem: 
2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 193704 kB' 'Slab: 257276 kB' 'SReclaimable: 193704 kB' 'SUnreclaim: 63572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.755 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.755 20:47:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 
20:47:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': 
' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # continue 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:48.756 20:47:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:48.756 20:47:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.756 20:47:16 -- setup/common.sh@33 -- # echo 0 00:04:48.756 20:47:16 -- setup/common.sh@33 -- # return 0 00:04:48.756 20:47:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.756 20:47:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.756 20:47:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.756 20:47:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.756 20:47:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.756 
node0=1024 expecting 1024 00:04:48.756 20:47:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.756 00:04:48.756 real 0m1.560s 00:04:48.756 user 0m0.620s 00:04:48.756 sys 0m0.920s 00:04:48.756 20:47:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.756 20:47:16 -- common/autotest_common.sh@10 -- # set +x 00:04:48.756 ************************************ 00:04:48.756 END TEST no_shrink_alloc 00:04:48.756 ************************************ 00:04:48.756 20:47:16 -- setup/hugepages.sh@217 -- # clear_hp 00:04:48.756 20:47:16 -- setup/hugepages.sh@37 -- # local node hp 00:04:48.756 20:47:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.756 20:47:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.756 20:47:16 -- setup/hugepages.sh@41 -- # echo 0 00:04:48.756 20:47:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.756 20:47:16 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.015 20:47:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.015 20:47:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.015 ************************************ 00:04:49.015 END TEST hugepages 00:04:49.015 ************************************ 00:04:49.015 00:04:49.015 real 0m6.598s 00:04:49.015 user 0m2.421s 00:04:49.016 sys 0m4.302s 00:04:49.016 20:47:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.016 20:47:16 -- common/autotest_common.sh@10 -- # set +x 00:04:49.016 20:47:16 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.016 20:47:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.016 20:47:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.016 20:47:16 -- common/autotest_common.sh@10 -- # set +x 00:04:49.016 ************************************ 00:04:49.016 START TEST driver 00:04:49.016 ************************************ 00:04:49.016 20:47:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:49.016 * Looking for test storage... 
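Before the driver-test output continues: the hugepages accounting above leans entirely on setup/common.sh's get_meminfo, which walks the chosen meminfo file key by key and prints the value of the first matching key. A minimal self-contained sketch of that technique, reconstructed from the trace rather than copied from the script (the sed call stands in for the extglob stripping of the "Node <n> " prefix seen at common.sh@29):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as traced above: scan the meminfo file and print
    # the value of the requested key, e.g. HugePages_Total -> 1024.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; prefer them when a node is given,
        # mirroring the mem_f=/sys/devices/system/node/node0/meminfo step.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # sysfs rows carry a "Node <n> " prefix; strip it so the keys line up.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total     # 1024 on the box traced above
    get_meminfo HugePages_Surp 0    # 0 for node0

With HugePages_Total=1024 and HugePages_Surp=0, both checks above, (( 1024 == nr_hugepages + surp + resv )) and the node0=1024 expecting 1024 echo, come out true.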
00:04:49.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.016 20:47:17 -- setup/driver.sh@68 -- # setup reset 00:04:49.016 20:47:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.016 20:47:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.586 20:47:17 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:49.586 20:47:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.586 20:47:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.586 20:47:17 -- common/autotest_common.sh@10 -- # set +x 00:04:49.586 ************************************ 00:04:49.586 START TEST guess_driver 00:04:49.586 ************************************ 00:04:49.586 20:47:17 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:49.586 20:47:17 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:49.586 20:47:17 -- setup/driver.sh@47 -- # local fail=0 00:04:49.586 20:47:17 -- setup/driver.sh@49 -- # pick_driver 00:04:49.586 20:47:17 -- setup/driver.sh@36 -- # vfio 00:04:49.586 20:47:17 -- setup/driver.sh@21 -- # local iommu_groups 00:04:49.587 20:47:17 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:49.587 20:47:17 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:49.587 20:47:17 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:49.587 20:47:17 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:49.587 20:47:17 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:49.587 20:47:17 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:49.587 20:47:17 -- setup/driver.sh@32 -- # return 1 00:04:49.587 20:47:17 -- setup/driver.sh@38 -- # uio 00:04:49.587 20:47:17 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:04:49.587 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:49.587 20:47:17 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:49.587 Looking for driver=uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:49.587 20:47:17 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:49.587 20:47:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:49.587 20:47:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.587 20:47:17 -- setup/driver.sh@45 -- # setup output config 00:04:49.587 20:47:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.587 20:47:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.845 20:47:17 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:49.845 20:47:17 -- setup/driver.sh@58 -- # continue 00:04:49.845 20:47:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.845 20:47:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.845 20:47:17 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:49.845 20:47:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.220 20:47:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.220 20:47:18 -- setup/driver.sh@65 -- # setup reset
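The guess_driver pass just traced encodes a simple preference order: vfio needs populated IOMMU groups (or unsafe no-IOMMU mode enabled), and with zero groups and unsafe_vfio=N it returns 1, so the test settles for uio_pci_generic once modprobe --show-depends resolves a .ko chain. A condensed sketch of that decision, reconstructed from the trace (illustrative, not the script verbatim):

    #!/usr/bin/env bash
    shopt -s nullglob   # empty /sys/kernel/iommu_groups must yield a 0-length array
    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio is viable only with IOMMU groups present or unsafe no-IOMMU on.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        # Otherwise fall back to uio_pci_generic if modprobe can resolve it.
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
    echo "Looking for driver=$(pick_driver)"

On the VM above this prints Looking for driver=uio_pci_generic, the same marker the test then parses back out of the setup.sh output.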
00:04:51.220 20:47:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.220 20:47:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.478 ************************************ 00:04:51.478 END TEST guess_driver 00:04:51.478 ************************************ 00:04:51.478 00:04:51.478 real 0m1.941s 00:04:51.478 user 0m0.484s 00:04:51.478 sys 0m1.464s 00:04:51.478 20:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.478 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 ************************************ 00:04:51.478 END TEST driver 00:04:51.478 ************************************ 00:04:51.478 00:04:51.478 real 0m2.495s 00:04:51.478 user 0m0.763s 00:04:51.478 sys 0m1.760s 00:04:51.478 20:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.478 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 20:47:19 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.478 20:47:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.478 20:47:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.478 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 ************************************ 00:04:51.478 START TEST devices 00:04:51.478 ************************************ 00:04:51.478 20:47:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:51.478 * Looking for test storage... 00:04:51.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:51.478 20:47:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:51.479 20:47:19 -- setup/devices.sh@192 -- # setup reset 00:04:51.479 20:47:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.479 20:47:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.046 20:47:20 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:52.046 20:47:20 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:52.046 20:47:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:52.046 20:47:20 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:52.046 20:47:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:52.046 20:47:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:52.046 20:47:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:52.046 20:47:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.046 20:47:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:52.046 20:47:20 -- setup/devices.sh@196 -- # blocks=() 00:04:52.046 20:47:20 -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.046 20:47:20 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.046 20:47:20 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.046 20:47:20 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.046 20:47:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.046 20:47:20 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.046 20:47:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.046 20:47:20 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:52.046 20:47:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:52.046 20:47:20 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.046 20:47:20 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:52.046 20:47:20 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.046 No valid GPT data, bailing 00:04:52.046 20:47:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.046 20:47:20 -- scripts/common.sh@393 -- # pt= 00:04:52.046 20:47:20 -- scripts/common.sh@394 -- # return 1 00:04:52.046 20:47:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.046 20:47:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.046 20:47:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.046 20:47:20 -- setup/common.sh@80 -- # echo 5368709120 00:04:52.046 20:47:20 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:52.046 20:47:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.046 20:47:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:52.046 20:47:20 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:52.046 20:47:20 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.046 20:47:20 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.046 20:47:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.046 20:47:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.046 20:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:52.046 ************************************ 00:04:52.046 START TEST nvme_mount 00:04:52.046 ************************************ 00:04:52.046 20:47:20 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:52.046 20:47:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.046 20:47:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.046 20:47:20 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.046 20:47:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.046 20:47:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.046 20:47:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.046 20:47:20 -- setup/common.sh@40 -- # local part_no=1 00:04:52.046 20:47:20 -- setup/common.sh@41 -- # local size=1073741824 00:04:52.046 20:47:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.046 20:47:20 -- setup/common.sh@44 -- # parts=() 00:04:52.046 20:47:20 -- setup/common.sh@44 -- # local parts 00:04:52.046 20:47:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.046 20:47:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.046 20:47:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.046 20:47:20 -- setup/common.sh@46 -- # (( part++ )) 00:04:52.046 20:47:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.046 20:47:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.046 20:47:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.046 20:47:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.422 Creating new GPT entries in memory. 00:04:53.422 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.422 other utilities. 00:04:53.422 20:47:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.422 20:47:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.422 20:47:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:53.422 20:47:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.422 20:47:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:54.358 Creating new GPT entries in memory. 00:04:54.358 The operation has completed successfully. 00:04:54.358 20:47:22 -- setup/common.sh@57 -- # (( part++ )) 00:04:54.358 20:47:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.358 20:47:22 -- setup/common.sh@62 -- # wait 96518 00:04:54.358 20:47:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.358 20:47:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.358 20:47:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.358 20:47:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.358 20:47:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.358 20:47:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.358 20:47:22 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.358 20:47:22 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.358 20:47:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.358 20:47:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.358 20:47:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.358 20:47:22 -- setup/devices.sh@53 -- # local found=0 00:04:54.358 20:47:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.358 20:47:22 -- setup/devices.sh@56 -- # : 00:04:54.358 20:47:22 -- setup/devices.sh@59 -- # local pci status 00:04:54.358 20:47:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.358 20:47:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.358 20:47:22 -- setup/devices.sh@47 -- # setup output config 00:04:54.358 20:47:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.358 20:47:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.358 20:47:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.358 20:47:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:54.358 20:47:22 -- setup/devices.sh@63 -- # found=1 00:04:54.358 20:47:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.358 20:47:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.358 20:47:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.358 20:47:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.358 20:47:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.734 20:47:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.734 20:47:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.734 20:47:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.734 20:47:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.734 20:47:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.734 20:47:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.734 20:47:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.734 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.734 20:47:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.734 20:47:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.734 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.734 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.734 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.734 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.734 20:47:23 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:55.734 20:47:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:55.734 20:47:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:55.734 20:47:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:55.734 20:47:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.734 20:47:23 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:55.734 20:47:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:55.734 20:47:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.734 20:47:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.734 20:47:23 -- setup/devices.sh@53 -- # local found=0 00:04:55.734 20:47:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.734 20:47:23 -- setup/devices.sh@56 -- # : 00:04:55.734 20:47:23 -- setup/devices.sh@59 -- # local pci status 00:04:55.734 20:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.734 20:47:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.734 20:47:23 -- setup/devices.sh@47 -- # setup output config 00:04:55.734 20:47:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.734 20:47:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.734 20:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.734 20:47:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:55.734 20:47:23 -- setup/devices.sh@63 -- # found=1 00:04:55.734 20:47:23 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:55.734 20:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.735 20:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.992 20:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.992 20:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.924 20:47:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.924 20:47:25 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:56.924 20:47:25 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.924 20:47:25 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.924 20:47:25 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.925 20:47:25 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.925 20:47:25 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:56.925 20:47:25 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:56.925 20:47:25 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.925 20:47:25 -- setup/devices.sh@50 -- # local mount_point= 00:04:56.925 20:47:25 -- setup/devices.sh@51 -- # local test_file= 00:04:56.925 20:47:25 -- setup/devices.sh@53 -- # local found=0 00:04:56.925 20:47:25 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.925 20:47:25 -- setup/devices.sh@59 -- # local pci status 00:04:56.925 20:47:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.925 20:47:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:56.925 20:47:25 -- setup/devices.sh@47 -- # setup output config 00:04:56.925 20:47:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.925 20:47:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.183 20:47:25 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.183 20:47:25 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:57.183 20:47:25 -- setup/devices.sh@63 -- # found=1 00:04:57.183 20:47:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.183 20:47:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.183 20:47:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.440 20:47:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:57.440 20:47:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.373 20:47:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.373 20:47:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.373 20:47:26 -- setup/devices.sh@68 -- # return 0 00:04:58.373 20:47:26 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.373 20:47:26 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.373 20:47:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.373 20:47:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.373 20:47:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.373 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.373 00:04:58.373 real 0m6.328s 00:04:58.373 user 0m0.670s 00:04:58.373 sys 0m3.687s 00:04:58.373 20:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.373 
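Stripped of the xtrace plumbing, the nvme_mount run above is one partition/format/mount/verify/cleanup cycle. A hedged sketch of that flow under the trace's device names (not the test verbatim; run as root against a disposable disk only, and the setup.sh config verification step is elided):

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1                                  # 5 GiB test disk per the trace
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    # flock the disk node so nothing races the partition-table rewrite.
    flock "$disk" sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1, as traced
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"            # dummy file whose presence the test verifies
    # ... scripts/setup.sh config runs here and must refuse to rebind the
    #     PCI device (0000:00:06.0) because the mount is active ...
    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"

Sectors 2048 through 264191 span 262144 sectors of 512 bytes, i.e. a 128 MiB partition, on the 5368709120-byte disk reported earlier.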
************************************ 00:04:58.373 END TEST nvme_mount 00:04:58.373 ************************************ 00:04:58.373 20:47:26 -- common/autotest_common.sh@10 -- # set +x 00:04:58.373 20:47:26 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.373 20:47:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.373 20:47:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.373 20:47:26 -- common/autotest_common.sh@10 -- # set +x 00:04:58.373 ************************************ 00:04:58.373 START TEST dm_mount 00:04:58.373 ************************************ 00:04:58.373 20:47:26 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:58.373 20:47:26 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.373 20:47:26 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.373 20:47:26 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.373 20:47:26 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.373 20:47:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.373 20:47:26 -- setup/common.sh@40 -- # local part_no=2 00:04:58.373 20:47:26 -- setup/common.sh@41 -- # local size=1073741824 00:04:58.373 20:47:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.373 20:47:26 -- setup/common.sh@44 -- # parts=() 00:04:58.373 20:47:26 -- setup/common.sh@44 -- # local parts 00:04:58.373 20:47:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.373 20:47:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.373 20:47:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.373 20:47:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.373 20:47:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.373 20:47:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.374 20:47:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:58.374 20:47:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.374 20:47:26 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:58.374 20:47:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.374 20:47:26 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:59.750 Creating new GPT entries in memory. 00:04:59.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.750 other utilities. 00:04:59.750 20:47:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.750 20:47:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.750 20:47:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.750 20:47:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.750 20:47:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:00.686 Creating new GPT entries in memory. 00:05:00.686 The operation has completed successfully. 00:05:00.686 20:47:28 -- setup/common.sh@57 -- # (( part++ )) 00:05:00.686 20:47:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.686 20:47:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.686 20:47:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.686 20:47:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:01.621 The operation has completed successfully. 
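The dm_mount pass that follows stitches the two fresh partitions into a single device-mapper target before formatting it. The trace never shows the table fed to dmsetup create (it arrives on stdin), so the linear concatenation below is an assumption about the shape of that table, not a copy of the script:

    #!/usr/bin/env bash
    set -e
    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")    # partition sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    # ASSUMED table: p1 then p2 mapped back-to-back as one linear device.
    printf '%s\n' "0 $s1 linear $p1 0" "$s1 $s2 linear $p2 0" |
        dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # resolves to /dev/dm-0 above
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p dm_mount
    mount /dev/mapper/nvme_dm_test dm_mount

Once the mapping exists, both partitions list dm-0 under /sys/class/block/nvme0n1p{1,2}/holders/, which is exactly what the holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 markers in the trace below check for.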
00:05:01.621 20:47:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:01.621 20:47:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.621 20:47:29 -- setup/common.sh@62 -- # wait 97002 00:05:01.621 20:47:29 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:01.621 20:47:29 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.621 20:47:29 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:01.621 20:47:29 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:01.621 20:47:29 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:01.621 20:47:29 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.621 20:47:29 -- setup/devices.sh@161 -- # break 00:05:01.621 20:47:29 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.621 20:47:29 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:01.621 20:47:29 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:01.621 20:47:29 -- setup/devices.sh@166 -- # dm=dm-0 00:05:01.621 20:47:29 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:01.621 20:47:29 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:01.621 20:47:29 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.621 20:47:29 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:01.621 20:47:29 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.621 20:47:29 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:01.621 20:47:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:01.621 20:47:29 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.621 20:47:29 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:01.621 20:47:29 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:01.621 20:47:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:01.621 20:47:29 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.621 20:47:29 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:01.621 20:47:29 -- setup/devices.sh@53 -- # local found=0 00:05:01.621 20:47:29 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:01.621 20:47:29 -- setup/devices.sh@56 -- # : 00:05:01.621 20:47:29 -- setup/devices.sh@59 -- # local pci status 00:05:01.622 20:47:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.622 20:47:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:01.622 20:47:29 -- setup/devices.sh@47 -- # setup output config 00:05:01.622 20:47:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.622 20:47:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:01.880 20:47:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.880 20:47:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:01.880 20:47:29 -- setup/devices.sh@63 -- # found=1 00:05:01.880 20:47:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.880 20:47:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.880 20:47:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.880 20:47:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:01.880 20:47:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.255 20:47:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.255 20:47:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:03.255 20:47:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.255 20:47:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.255 20:47:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.255 20:47:31 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.255 20:47:31 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:03.255 20:47:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:03.255 20:47:31 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:03.255 20:47:31 -- setup/devices.sh@50 -- # local mount_point= 00:05:03.255 20:47:31 -- setup/devices.sh@51 -- # local test_file= 00:05:03.255 20:47:31 -- setup/devices.sh@53 -- # local found=0 00:05:03.255 20:47:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.255 20:47:31 -- setup/devices.sh@59 -- # local pci status 00:05:03.255 20:47:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.255 20:47:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:03.255 20:47:31 -- setup/devices.sh@47 -- # setup output config 00:05:03.255 20:47:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.255 20:47:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.255 20:47:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.255 20:47:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:03.255 20:47:31 -- setup/devices.sh@63 -- # found=1 00:05:03.255 20:47:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.255 20:47:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.255 20:47:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.255 20:47:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:03.255 20:47:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.650 20:47:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.650 20:47:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.650 20:47:32 -- setup/devices.sh@68 -- # return 0 00:05:04.650 20:47:32 -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.650 20:47:32 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.650 20:47:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.650 20:47:32 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.650 20:47:32 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.650 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.650 20:47:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.650 00:05:04.650 real 0m6.015s 00:05:04.650 user 0m0.464s 00:05:04.650 sys 0m2.473s 00:05:04.650 20:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.650 ************************************ 00:05:04.650 20:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.650 END TEST dm_mount 00:05:04.650 ************************************ 00:05:04.650 20:47:32 -- setup/devices.sh@1 -- # cleanup 00:05:04.650 20:47:32 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.650 20:47:32 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.650 20:47:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.650 20:47:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.650 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.650 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:04.650 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.650 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.650 20:47:32 -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.650 20:47:32 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.650 20:47:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.650 20:47:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.650 20:47:32 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.650 00:05:04.650 real 0m13.106s 00:05:04.650 user 0m1.553s 00:05:04.650 sys 0m6.490s 00:05:04.650 20:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.650 ************************************ 00:05:04.650 END TEST devices 00:05:04.650 ************************************ 00:05:04.650 20:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.650 00:05:04.650 real 0m27.199s 00:05:04.650 user 0m6.439s 00:05:04.650 sys 0m15.939s 00:05:04.650 20:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.650 ************************************ 00:05:04.650 END TEST setup.sh 00:05:04.650 ************************************ 00:05:04.650 20:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:04.650 20:47:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.650 Hugepages 00:05:04.650 node hugesize free / total 00:05:04.909 node0 1048576kB 0 / 0 00:05:04.909 node0 2048kB 2048 / 2048 00:05:04.909 00:05:04.909 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.909 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:04.909 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:04.909 20:47:33 -- spdk/autotest.sh@141 -- # uname -s 00:05:04.909 20:47:33 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:04.909 20:47:33 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:04.909 20:47:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:05.477 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.413 20:47:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:07.790 20:47:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:07.790 20:47:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:07.790 20:47:35 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.790 20:47:35 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:07.790 20:47:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:07.790 20:47:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:07.790 20:47:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.790 20:47:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:07.790 20:47:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:07.790 20:47:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:07.790 20:47:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:07.790 20:47:35 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:07.790 Waiting for block devices as requested 00:05:08.049 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:08.049 20:47:36 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:08.049 20:47:36 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:08.049 20:47:36 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:08.049 20:47:36 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:08.049 20:47:36 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:08.049 20:47:36 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:08.049 20:47:36 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:08.049 20:47:36 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:08.049 20:47:36 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:08.049 20:47:36 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:08.049 20:47:36 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:08.049 20:47:36 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:08.049 20:47:36 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:08.049 20:47:36 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:08.049 20:47:36 -- common/autotest_common.sh@1542 -- # continue 00:05:08.049 20:47:36 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:08.049 20:47:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:08.049 20:47:36 -- common/autotest_common.sh@10 -- # set +x 00:05:08.049 20:47:36 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:08.049 20:47:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.049 20:47:36 -- common/autotest_common.sh@10 -- # set +x 00:05:08.049 20:47:36 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:08.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:08.566 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.502 20:47:37 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:09.502 20:47:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:09.502 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:09.762 20:47:37 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:09.762 20:47:37 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:09.762 20:47:37 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:09.762 20:47:37 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:09.762 20:47:37 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:09.762 20:47:37 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:09.762 20:47:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:09.762 20:47:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:09.762 20:47:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.762 20:47:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.762 20:47:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:09.762 20:47:37 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:09.762 20:47:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:09.762 20:47:37 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:09.762 20:47:37 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:09.762 20:47:37 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:09.762 20:47:37 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:09.762 20:47:37 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:09.762 20:47:37 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:09.762 20:47:37 -- common/autotest_common.sh@1578 -- # return 0 00:05:09.762 20:47:37 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:09.762 20:47:37 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:09.762 20:47:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.762 20:47:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.762 20:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:09.762 ************************************ 00:05:09.762 START TEST unittest 00:05:09.762 ************************************ 00:05:09.762 20:47:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:09.762 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:09.762 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:09.762 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:09.762 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:09.762 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:09.762 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.762 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:09.762 ++ rpc_py=rpc_cmd 00:05:09.762 ++ set -e 00:05:09.762 ++ shopt -s nullglob 00:05:09.762 ++ shopt -s extglob 00:05:09.762 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:09.762 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:09.762 +++ CONFIG_WPDK_DIR= 00:05:09.762 +++ CONFIG_ASAN=y 00:05:09.762 +++ CONFIG_VBDEV_COMPRESS=n 00:05:09.762 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:09.762 +++ CONFIG_USDT=n 00:05:09.762 +++ CONFIG_CUSTOMOCF=n 00:05:09.762 +++ CONFIG_PREFIX=/usr/local 00:05:09.762 +++ CONFIG_RBD=n 00:05:09.762 +++ CONFIG_LIBDIR= 00:05:09.762 +++ CONFIG_IDXD=y 00:05:09.762 +++ CONFIG_NVME_CUSE=y 00:05:09.762 +++ CONFIG_SMA=n 00:05:09.762 +++ CONFIG_VTUNE=n 00:05:09.762 +++ CONFIG_TSAN=n 00:05:09.762 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:09.762 +++ CONFIG_VFIO_USER_DIR= 00:05:09.762 +++ CONFIG_PGO_CAPTURE=n 00:05:09.762 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:09.762 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:09.762 +++ CONFIG_LTO=n 00:05:09.762 +++ CONFIG_ISCSI_INITIATOR=y 00:05:09.762 +++ CONFIG_CET=n 00:05:09.762 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:09.762 +++ CONFIG_OCF_PATH= 00:05:09.762 +++ CONFIG_RDMA_SET_TOS=y 00:05:09.762 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:09.762 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:09.762 +++ CONFIG_UBLK=n 00:05:09.762 +++ CONFIG_ISAL_CRYPTO=y 00:05:09.762 +++ CONFIG_OPENSSL_PATH= 00:05:09.762 +++ CONFIG_OCF=n 00:05:09.762 +++ CONFIG_FUSE=n 00:05:09.762 +++ CONFIG_VTUNE_DIR= 00:05:09.762 +++ CONFIG_FUZZER_LIB= 00:05:09.762 +++ CONFIG_FUZZER=n 00:05:09.762 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:09.762 +++ CONFIG_CRYPTO=n 00:05:09.762 +++ CONFIG_PGO_USE=n 00:05:09.762 +++ CONFIG_VHOST=y 00:05:09.762 +++ CONFIG_DAOS=n 00:05:09.762 +++ CONFIG_DPDK_INC_DIR= 00:05:09.762 +++ CONFIG_DAOS_DIR= 00:05:09.762 +++ CONFIG_UNIT_TESTS=y 00:05:09.762 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:09.762 +++ CONFIG_VIRTIO=y 00:05:09.762 +++ CONFIG_COVERAGE=y 00:05:09.762 +++ CONFIG_RDMA=y 00:05:09.762 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:09.762 +++ CONFIG_URING_PATH= 00:05:09.762 +++ CONFIG_XNVME=n 00:05:09.762 +++ CONFIG_VFIO_USER=n 00:05:09.762 +++ CONFIG_ARCH=native 00:05:09.762 +++ CONFIG_URING_ZNS=n 00:05:09.762 +++ CONFIG_WERROR=y 00:05:09.762 +++ CONFIG_HAVE_LIBBSD=n 00:05:09.762 +++ CONFIG_UBSAN=y 00:05:09.762 +++ CONFIG_IPSEC_MB_DIR= 00:05:09.762 +++ CONFIG_GOLANG=n 00:05:09.762 +++ CONFIG_ISAL=y 00:05:09.762 +++ CONFIG_IDXD_KERNEL=n 00:05:09.762 +++ CONFIG_DPDK_LIB_DIR= 00:05:09.762 +++ CONFIG_RDMA_PROV=verbs 00:05:09.762 +++ CONFIG_APPS=y 00:05:09.762 +++ CONFIG_SHARED=n 00:05:09.762 +++ CONFIG_FC_PATH= 00:05:09.762 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:09.762 +++ CONFIG_FC=n 00:05:09.762 +++ CONFIG_AVAHI=n 00:05:09.762 +++ CONFIG_FIO_PLUGIN=y 00:05:09.762 +++ CONFIG_RAID5F=n 00:05:09.762 +++ CONFIG_EXAMPLES=y 00:05:09.762 +++ CONFIG_TESTS=y 00:05:09.762 +++ CONFIG_CRYPTO_MLX5=n 00:05:09.762 +++ CONFIG_MAX_LCORES= 00:05:09.762 +++ CONFIG_IPSEC_MB=n 00:05:09.762 +++ CONFIG_DEBUG=y 00:05:09.762 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:09.762 +++ CONFIG_CROSS_PREFIX= 00:05:09.762 +++ CONFIG_URING=n 00:05:09.762 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:09.762 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:09.762 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:05:09.762 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:09.762 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:09.762 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:09.762 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:09.762 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:09.762 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:09.762 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:09.762 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:09.762 +++ VHOST_APP=("$_app_dir/vhost") 00:05:09.762 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:09.762 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:09.762 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:09.762 +++ [[ #ifndef SPDK_CONFIG_H 00:05:09.762 #define SPDK_CONFIG_H 00:05:09.762 #define SPDK_CONFIG_APPS 1 00:05:09.762 #define SPDK_CONFIG_ARCH native 00:05:09.762 #define SPDK_CONFIG_ASAN 1 00:05:09.762 #undef SPDK_CONFIG_AVAHI 00:05:09.762 #undef SPDK_CONFIG_CET 00:05:09.762 #define SPDK_CONFIG_COVERAGE 1 00:05:09.762 #define SPDK_CONFIG_CROSS_PREFIX 00:05:09.762 #undef SPDK_CONFIG_CRYPTO 00:05:09.762 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:09.762 #undef SPDK_CONFIG_CUSTOMOCF 00:05:09.762 #undef SPDK_CONFIG_DAOS 00:05:09.763 #define SPDK_CONFIG_DAOS_DIR 00:05:09.763 #define SPDK_CONFIG_DEBUG 1 00:05:09.763 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:09.763 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:09.763 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:09.763 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:09.763 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:09.763 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:09.763 #define SPDK_CONFIG_EXAMPLES 1 00:05:09.763 #undef SPDK_CONFIG_FC 00:05:09.763 #define SPDK_CONFIG_FC_PATH 00:05:09.763 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:09.763 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:09.763 #undef SPDK_CONFIG_FUSE 00:05:09.763 #undef SPDK_CONFIG_FUZZER 00:05:09.763 #define SPDK_CONFIG_FUZZER_LIB 00:05:09.763 #undef SPDK_CONFIG_GOLANG 00:05:09.763 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:09.763 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:09.763 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:09.763 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:09.763 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:09.763 #define SPDK_CONFIG_IDXD 1 00:05:09.763 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:09.763 #undef SPDK_CONFIG_IPSEC_MB 00:05:09.763 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:09.763 #define SPDK_CONFIG_ISAL 1 00:05:09.763 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:09.763 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:09.763 #define SPDK_CONFIG_LIBDIR 00:05:09.763 #undef SPDK_CONFIG_LTO 00:05:09.763 #define SPDK_CONFIG_MAX_LCORES 00:05:09.763 #define SPDK_CONFIG_NVME_CUSE 1 00:05:09.763 #undef SPDK_CONFIG_OCF 00:05:09.763 #define SPDK_CONFIG_OCF_PATH 00:05:09.763 #define SPDK_CONFIG_OPENSSL_PATH 00:05:09.763 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:09.763 #undef SPDK_CONFIG_PGO_USE 00:05:09.763 #define SPDK_CONFIG_PREFIX /usr/local 00:05:09.763 #undef SPDK_CONFIG_RAID5F 00:05:09.763 #undef SPDK_CONFIG_RBD 00:05:09.763 #define SPDK_CONFIG_RDMA 1 00:05:09.763 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:09.763 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:09.763 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:09.763 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:09.763 #undef SPDK_CONFIG_SHARED 00:05:09.763 #undef SPDK_CONFIG_SMA 00:05:09.763 #define SPDK_CONFIG_TESTS 1 00:05:09.763 #undef 
SPDK_CONFIG_TSAN 00:05:09.763 #undef SPDK_CONFIG_UBLK 00:05:09.763 #define SPDK_CONFIG_UBSAN 1 00:05:09.763 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:09.763 #undef SPDK_CONFIG_URING 00:05:09.763 #define SPDK_CONFIG_URING_PATH 00:05:09.763 #undef SPDK_CONFIG_URING_ZNS 00:05:09.763 #undef SPDK_CONFIG_USDT 00:05:09.763 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:09.763 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:09.763 #undef SPDK_CONFIG_VFIO_USER 00:05:09.763 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:09.763 #define SPDK_CONFIG_VHOST 1 00:05:09.763 #define SPDK_CONFIG_VIRTIO 1 00:05:09.763 #undef SPDK_CONFIG_VTUNE 00:05:09.763 #define SPDK_CONFIG_VTUNE_DIR 00:05:09.763 #define SPDK_CONFIG_WERROR 1 00:05:09.763 #define SPDK_CONFIG_WPDK_DIR 00:05:09.763 #undef SPDK_CONFIG_XNVME 00:05:09.763 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:09.763 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:09.763 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.763 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:09.763 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.763 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.763 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:09.763 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:09.763 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:09.763 ++++ export PATH 00:05:09.763 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:09.763 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:09.763 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:09.763 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:09.763 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:09.763 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:09.763 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:09.763 +++ TEST_TAG=N/A 00:05:09.763 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:09.763 ++ : 1 00:05:09.763 ++ export RUN_NIGHTLY 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_RUN_VALGRIND 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_TEST_UNITTEST 00:05:09.763 ++ : 00:05:09.763 ++ export SPDK_TEST_AUTOBUILD 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_RELEASE_BUILD 00:05:09.763 ++ : 0 00:05:09.763 ++ 
export SPDK_TEST_ISAL 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_ISCSI 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_TEST_NVME 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVME_PMR 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVME_BP 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVME_CLI 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVME_CUSE 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVME_FDP 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVMF 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VFIOUSER 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_FUZZER 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_FUZZER_SHORT 00:05:09.763 ++ : rdma 00:05:09.763 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_RBD 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VHOST 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_TEST_BLOCKDEV 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_IOAT 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_BLOBFS 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VHOST_INIT 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_LVOL 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_RUN_ASAN 00:05:09.763 ++ : 1 00:05:09.763 ++ export SPDK_RUN_UBSAN 00:05:09.763 ++ : 00:05:09.763 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_RUN_NON_ROOT 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_CRYPTO 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_FTL 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_OCF 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_VMD 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_OPAL 00:05:09.763 ++ : 00:05:09.763 ++ export SPDK_TEST_NATIVE_DPDK 00:05:09.763 ++ : true 00:05:09.763 ++ export SPDK_AUTOTEST_X 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_RAID5 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_URING 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_USDT 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_USE_IGB_UIO 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_SCHEDULER 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_SCANBUILD 00:05:09.763 ++ : 00:05:09.763 ++ export SPDK_TEST_NVMF_NICS 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_SMA 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_DAOS 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_XNVME 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_ACCEL_DSA 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_ACCEL_IAA 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_ACCEL_IOAT 00:05:09.763 ++ : 00:05:09.763 ++ export SPDK_TEST_FUZZER_TARGET 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_TEST_NVMF_MDNS 00:05:09.763 ++ : 0 00:05:09.763 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:09.763 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:09.763 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:09.763 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:09.763 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:09.763 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:09.763 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:09.763 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:09.763 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:09.763 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:09.763 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:09.763 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:09.763 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:09.763 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:09.763 ++ PYTHONDONTWRITEBYTECODE=1 00:05:09.763 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:09.764 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:09.764 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:09.764 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:09.764 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:09.764 ++ rm -rf /var/tmp/asan_suppression_file 00:05:09.764 ++ cat 00:05:09.764 ++ echo leak:libfuse3.so 00:05:09.764 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:09.764 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:09.764 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:09.764 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:09.764 ++ '[' -z /var/spdk/dependencies ']' 00:05:09.764 ++ export DEPENDENCY_DIR 00:05:09.764 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:09.764 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:09.764 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:09.764 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:09.764 ++ export QEMU_BIN= 00:05:09.764 ++ QEMU_BIN= 00:05:09.764 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:09.764 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:09.764 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:09.764 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:09.764 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:09.764 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:09.764 ++ '[' 0 -eq 0 ']' 00:05:09.764 ++ export valgrind= 00:05:09.764 ++ valgrind= 00:05:09.764 +++ uname -s 00:05:09.764 ++ '[' Linux = Linux ']' 00:05:09.764 ++ HUGEMEM=4096 00:05:09.764 ++ export CLEAR_HUGE=yes 00:05:09.764 ++ CLEAR_HUGE=yes 00:05:09.764 ++ [[ 0 -eq 1 ]] 00:05:09.764 ++ [[ 0 -eq 1 ]] 00:05:09.764 ++ MAKE=make 00:05:09.764 +++ nproc 00:05:09.764 ++ MAKEFLAGS=-j10 00:05:09.764 ++ export HUGEMEM=4096 00:05:09.764 ++ HUGEMEM=4096 00:05:09.764 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:09.764 ++ NO_HUGE=() 00:05:09.764 ++ TEST_MODE= 00:05:09.764 ++ [[ -z '' ]] 00:05:09.764 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:09.764 ++ exec 00:05:09.764 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:09.764 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:09.764 ++ set_test_storage 2147483648 00:05:09.764 ++ [[ -v testdir ]] 00:05:09.764 ++ local requested_size=2147483648 00:05:09.764 ++ local mount target_dir 00:05:09.764 ++ local -A mounts fss sizes avails uses 00:05:09.764 ++ local source fs size avail mount use 00:05:09.764 ++ local storage_fallback storage_candidates 00:05:09.764 +++ mktemp -udt spdk.XXXXXX 00:05:09.764 ++ storage_fallback=/tmp/spdk.PXy3BF 00:05:09.764 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:09.764 ++ [[ -n '' ]] 00:05:09.764 ++ [[ -n '' ]] 00:05:09.764 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.PXy3BF/tests/unit /tmp/spdk.PXy3BF 00:05:09.764 ++ requested_size=2214592512 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 +++ df -T 00:05:09.764 +++ grep -v Filesystem 00:05:09.764 ++ mounts["$mount"]=tmpfs 00:05:09.764 ++ fss["$mount"]=tmpfs 00:05:09.764 ++ avails["$mount"]=1252601856 00:05:09.764 ++ sizes["$mount"]=1253683200 00:05:09.764 ++ uses["$mount"]=1081344 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=/dev/vda1 00:05:09.764 ++ fss["$mount"]=ext4 00:05:09.764 ++ avails["$mount"]=10484330496 00:05:09.764 ++ sizes["$mount"]=20616794112 00:05:09.764 ++ uses["$mount"]=10115686400 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=tmpfs 00:05:09.764 ++ fss["$mount"]=tmpfs 00:05:09.764 ++ avails["$mount"]=6268395520 00:05:09.764 ++ sizes["$mount"]=6268395520 00:05:09.764 ++ uses["$mount"]=0 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=tmpfs 00:05:09.764 ++ fss["$mount"]=tmpfs 00:05:09.764 ++ avails["$mount"]=5242880 00:05:09.764 ++ sizes["$mount"]=5242880 00:05:09.764 ++ uses["$mount"]=0 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=/dev/vda15 00:05:09.764 ++ fss["$mount"]=vfat 00:05:09.764 ++ avails["$mount"]=103061504 00:05:09.764 ++ sizes["$mount"]=109395968 00:05:09.764 ++ uses["$mount"]=6334464 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=tmpfs 00:05:09.764 ++ fss["$mount"]=tmpfs 00:05:09.764 ++ avails["$mount"]=1253675008 00:05:09.764 ++ sizes["$mount"]=1253679104 00:05:09.764 ++ uses["$mount"]=4096 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:05:09.764 ++ fss["$mount"]=fuse.sshfs 00:05:09.764 ++ avails["$mount"]=96584323072 00:05:09.764 ++ sizes["$mount"]=105088212992 00:05:09.764 ++ uses["$mount"]=3118456832 00:05:09.764 ++ read -r source fs size use avail _ mount 00:05:09.764 ++ printf '* Looking for test storage...\n' 00:05:09.764 * Looking for test storage... 
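[Editorial sketch, not part of the captured log] set_test_storage, traced above, snapshots df -T into associative arrays keyed by mount point, then — as the lines that follow show — resolves which mount backs each candidate directory and takes the first one with enough free space. The same probe reduced to its essentials; the candidate path and variable names come from the trace, and the 2 GiB figure is the size requested above (which the trace then pads to 2214592512):

requested_size=2147483648            # 2 GiB, as requested above
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source; fss["$mount"]=$fs
  sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
done < <(df -T | grep -v Filesystem)
# resolve which mount backs a candidate directory, then compare free space
target_dir=/home/vagrant/spdk_repo/spdk/test/unit
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
(( avails[$mount] >= requested_size )) && printf '* Found test storage at %s\n' "$target_dir"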
00:05:09.764 ++ local target_space new_size 00:05:09.764 ++ for target_dir in "${storage_candidates[@]}" 00:05:09.764 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:09.764 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:09.764 ++ mount=/ 00:05:09.764 ++ target_space=10484330496 00:05:09.764 ++ (( target_space == 0 || target_space < requested_size )) 00:05:09.764 ++ (( target_space >= requested_size )) 00:05:09.764 ++ [[ ext4 == tmpfs ]] 00:05:09.764 ++ [[ ext4 == ramfs ]] 00:05:09.764 ++ [[ / == / ]] 00:05:09.764 ++ new_size=12330278912 00:05:09.764 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:09.764 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:09.764 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:09.764 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:09.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:09.764 ++ return 0 00:05:09.764 ++ set -o errtrace 00:05:09.764 ++ shopt -s extdebug 00:05:09.764 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:09.764 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:09.764 20:47:37 -- common/autotest_common.sh@1672 -- # true 00:05:09.764 20:47:37 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:09.764 20:47:37 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:09.764 20:47:37 -- common/autotest_common.sh@29 -- # exec 00:05:09.764 20:47:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:09.764 20:47:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:09.764 20:47:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:09.764 20:47:37 -- common/autotest_common.sh@18 -- # set -x 00:05:09.764 20:47:37 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:09.764 20:47:37 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:09.764 20:47:37 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:09.764 20:47:37 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:09.764 20:47:37 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:09.764 20:47:37 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:09.764 20:47:37 -- unit/unittest.sh@179 -- # hash lcov 00:05:09.764 20:47:37 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:09.764 20:47:37 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:09.764 20:47:37 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:09.764 20:47:37 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:09.764 20:47:37 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:09.764 20:47:37 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:09.764 20:47:37 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:09.764 20:47:37 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:09.764 --rc lcov_branch_coverage=1 00:05:09.764 --rc lcov_function_coverage=1 00:05:09.764 --rc genhtml_branch_coverage=1 00:05:09.764 --rc genhtml_function_coverage=1 00:05:09.764 --rc genhtml_legend=1 00:05:09.764 --rc geninfo_all_blocks=1 00:05:09.764 ' 00:05:09.764 20:47:37 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:09.764 --rc lcov_branch_coverage=1 00:05:09.764 --rc lcov_function_coverage=1 00:05:09.764 --rc genhtml_branch_coverage=1 00:05:09.764 --rc genhtml_function_coverage=1 00:05:09.764 --rc genhtml_legend=1 00:05:09.764 
--rc geninfo_all_blocks=1 00:05:09.764 ' 00:05:09.764 20:47:37 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:09.764 --rc lcov_branch_coverage=1 00:05:09.764 --rc lcov_function_coverage=1 00:05:09.764 --rc genhtml_branch_coverage=1 00:05:09.764 --rc genhtml_function_coverage=1 00:05:09.764 --rc genhtml_legend=1 00:05:09.764 --rc geninfo_all_blocks=1 00:05:09.764 --no-external' 00:05:09.764 20:47:37 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:09.764 --rc lcov_branch_coverage=1 00:05:09.764 --rc lcov_function_coverage=1 00:05:09.764 --rc genhtml_branch_coverage=1 00:05:09.764 --rc genhtml_function_coverage=1 00:05:09.764 --rc genhtml_legend=1 00:05:09.764 --rc geninfo_all_blocks=1 00:05:09.764 --no-external' 00:05:09.764 20:47:37 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:24.666 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:24.666 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:24.666 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:24.666 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:24.666 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:24.666 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:51.207 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:51.207 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:51.207 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:51.208 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:51.208 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:51.208 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:51.208 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:51.208 20:48:19 -- unit/unittest.sh@206 -- # uname -m 00:05:51.208 20:48:19 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:51.208 20:48:19 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:51.208 20:48:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.208 20:48:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.208 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:51.208 ************************************ 00:05:51.208 START TEST unittest_pci_event 00:05:51.208 ************************************ 00:05:51.208 20:48:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:51.208 00:05:51.208 00:05:51.208 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.208 http://cunit.sourceforge.net/ 00:05:51.208 00:05:51.208 00:05:51.208 Suite: pci_event 00:05:51.208 Test: test_pci_parse_event 
...[2024-06-09 20:48:19.316827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:51.209 [2024-06-09 20:48:19.317480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:51.209 passed 00:05:51.209 00:05:51.209 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.209 suites 1 1 n/a 0 0 00:05:51.209 tests 1 1 1 0 0 00:05:51.209 asserts 15 15 15 0 n/a 00:05:51.209 00:05:51.209 Elapsed time = 0.001 seconds 00:05:51.209 00:05:51.209 real 0m0.029s 00:05:51.209 user 0m0.013s 00:05:51.209 sys 0m0.014s 00:05:51.209 20:48:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.209 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:51.209 ************************************ 00:05:51.209 END TEST unittest_pci_event 00:05:51.209 ************************************ 00:05:51.209 20:48:19 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:51.209 20:48:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.209 20:48:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.209 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:51.468 ************************************ 00:05:51.468 START TEST unittest_include 00:05:51.468 ************************************ 00:05:51.468 20:48:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:51.468 00:05:51.468 00:05:51.468 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.468 http://cunit.sourceforge.net/ 00:05:51.468 00:05:51.468 00:05:51.468 Suite: histogram 00:05:51.468 Test: histogram_test ...passed 00:05:51.468 Test: histogram_merge ...passed 00:05:51.468 00:05:51.468 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.468 suites 1 1 n/a 0 0 00:05:51.468 tests 2 2 2 0 0 00:05:51.468 asserts 50 50 50 0 n/a 00:05:51.468 00:05:51.468 Elapsed time = 0.006 seconds 00:05:51.468 00:05:51.468 real 0m0.036s 00:05:51.468 user 0m0.021s 00:05:51.468 sys 0m0.016s 00:05:51.468 20:48:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.468 ************************************ 00:05:51.468 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:51.468 END TEST unittest_include 00:05:51.468 ************************************ 00:05:51.468 20:48:19 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:51.468 20:48:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.468 20:48:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.468 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:51.468 ************************************ 00:05:51.468 START TEST unittest_bdev 00:05:51.468 ************************************ 00:05:51.468 20:48:19 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:51.468 20:48:19 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:51.468 00:05:51.468 00:05:51.468 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.468 http://cunit.sourceforge.net/ 00:05:51.468 00:05:51.468 00:05:51.468 Suite: bdev 00:05:51.468 Test: bytes_to_blocks_test ...passed 00:05:51.468 Test: num_blocks_test ...passed 00:05:51.468 Test: io_valid_test ...passed 00:05:51.468 Test: open_write_test ...[2024-06-09 20:48:19.561236] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:51.468 [2024-06-09 20:48:19.562132] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:51.468 [2024-06-09 20:48:19.562469] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:51.468 passed 00:05:51.468 Test: claim_test ...passed 00:05:51.726 Test: alias_add_del_test ...[2024-06-09 20:48:19.653090] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:51.726 [2024-06-09 20:48:19.653459] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:51.726 [2024-06-09 20:48:19.653725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:51.726 passed 00:05:51.726 Test: get_device_stat_test ...passed 00:05:51.726 Test: bdev_io_types_test ...passed 00:05:51.726 Test: bdev_io_wait_test ...passed 00:05:51.726 Test: bdev_io_spans_split_test ...passed 00:05:51.726 Test: bdev_io_boundary_split_test ...passed 00:05:51.726 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-09 20:48:19.818721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:51.726 passed 00:05:51.726 Test: bdev_io_mix_split_test ...passed 00:05:51.726 Test: bdev_io_split_with_io_wait ...passed 00:05:51.985 Test: bdev_io_write_unit_split_test ...[2024-06-09 20:48:19.915199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:51.985 [2024-06-09 20:48:19.915621] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:51.985 [2024-06-09 20:48:19.915809] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:51.985 [2024-06-09 20:48:19.916037] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:51.985 passed 00:05:51.985 Test: bdev_io_alignment_with_boundary ...passed 00:05:51.985 Test: bdev_io_alignment ...passed 00:05:51.985 Test: bdev_histograms ...passed 00:05:51.985 Test: bdev_write_zeroes ...passed 00:05:51.985 Test: bdev_compare_and_write ...passed 00:05:52.244 Test: bdev_compare ...passed 00:05:52.244 Test: bdev_compare_emulated ...passed 00:05:52.244 Test: bdev_zcopy_write ...passed 00:05:52.244 Test: bdev_zcopy_read ...passed 00:05:52.244 Test: bdev_open_while_hotremove ...passed 00:05:52.244 Test: bdev_close_while_hotremove ...passed 00:05:52.244 Test: bdev_open_ext_test ...[2024-06-09 20:48:20.355669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:52.244 passed 00:05:52.244 Test: bdev_open_ext_unregister ...passed 00:05:52.244 Test: bdev_set_io_timeout ...[2024-06-09 20:48:20.355910] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:52.244 passed 00:05:52.503 Test: bdev_set_qd_sampling ...passed 00:05:52.503 Test: lba_range_overlap ...passed 00:05:52.503 Test: lock_lba_range_check_ranges 
...passed 00:05:52.503 Test: lock_lba_range_with_io_outstanding ...passed 00:05:52.503 Test: lock_lba_range_overlapped ...passed 00:05:52.503 Test: bdev_quiesce ...[2024-06-09 20:48:20.525383] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:52.503 passed 00:05:52.503 Test: bdev_io_abort ...passed 00:05:52.503 Test: bdev_unmap ...passed 00:05:52.503 Test: bdev_write_zeroes_split_test ...passed 00:05:52.503 Test: bdev_set_options_test ...passed 00:05:52.503 Test: bdev_get_memory_domains ...passed 00:05:52.503 Test: bdev_io_ext ...[2024-06-09 20:48:20.638458] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:52.503 passed 00:05:52.762 Test: bdev_io_ext_no_opts ...passed 00:05:52.762 Test: bdev_io_ext_invalid_opts ...passed 00:05:52.762 Test: bdev_io_ext_split ...passed 00:05:52.762 Test: bdev_io_ext_bounce_buffer ...passed 00:05:52.762 Test: bdev_register_uuid_alias ...[2024-06-09 20:48:20.826153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name eb6b64f3-b314-489e-ba47-593e63bb7364 already exists 00:05:52.762 [2024-06-09 20:48:20.826240] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:eb6b64f3-b314-489e-ba47-593e63bb7364 alias for bdev bdev0 00:05:52.762 passed 00:05:52.762 Test: bdev_unregister_by_name ...[2024-06-09 20:48:20.842956] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:52.762 passed 00:05:52.762 Test: for_each_bdev_test ...[2024-06-09 20:48:20.843019] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
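[Editorial sketch, not part of the captured log] Every *_ut binary in this run is wrapped by run_test, which prints the starred START/END banners seen throughout and rejects calls without a command to execute (the '[' 2 -le 1 ']' guard visible in the trace). Its visible behaviour, minus the xtrace toggling and timing bookkeeping done by the real helper in test/common/autotest_common.sh:

run_test() {
  local test_name=$1; shift
  (( $# >= 1 )) || return 1          # the "'[' 2 -le 1 ']'" argument guard above
  echo '************************************'
  echo "START TEST $test_name"
  echo '************************************'
  "$@"; local rc=$?
  echo '************************************'
  echo "END TEST $test_name"
  echo '************************************'
  return $rc
}
# e.g. run_test unittest_bdev unittest_bdev, as at the top of this suite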
00:05:52.762 passed 00:05:52.762 Test: bdev_seek_test ...passed 00:05:52.762 Test: bdev_copy ...passed 00:05:53.024 Test: bdev_copy_split_test ...passed 00:05:53.024 Test: examine_locks ...passed 00:05:53.024 Test: claim_v2_rwo ...[2024-06-09 20:48:20.939051] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.024 passed 00:05:53.024 Test: claim_v2_rom ...[2024-06-09 20:48:20.939215] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939233] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939286] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:53.024 [2024-06-09 20:48:20.939435] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:53.024 passed 00:05:53.024 Test: claim_v2_rwm ...[2024-06-09 20:48:20.939521] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:53.024 [2024-06-09 20:48:20.939618] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:53.024 [2024-06-09 20:48:20.939738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:53.024 [2024-06-09 20:48:20.939802] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939829] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939852] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939870] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:05:53.024 passed 00:05:53.024 Test: claim_v2_existing_writer ...[2024-06-09 20:48:20.939894] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:53.024 [2024-06-09 20:48:20.939935] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:53.024 [2024-06-09 20:48:20.940063] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:53.024 [2024-06-09 20:48:20.940096] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:53.024 passed 00:05:53.024 Test: claim_v2_existing_v1 ...passed 00:05:53.024 Test: claim_v1_existing_v2 ...[2024-06-09 20:48:20.940215] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:53.025 [2024-06-09 20:48:20.940247] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:53.025 [2024-06-09 20:48:20.940265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:53.025 [2024-06-09 20:48:20.940379] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:53.025 passed 00:05:53.025 Test: examine_claimed ...[2024-06-09 20:48:20.940439] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:53.025 [2024-06-09 20:48:20.940473] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:53.025 [2024-06-09 20:48:20.940745] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:53.025 passed 00:05:53.025 00:05:53.025 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.025 suites 1 1 n/a 0 0 00:05:53.025 tests 59 59 59 0 0 00:05:53.025 asserts 4599 4599 4599 0 n/a 00:05:53.025 00:05:53.025 Elapsed time = 1.445 seconds 00:05:53.025 20:48:20 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:53.025 00:05:53.025 00:05:53.025 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.025 http://cunit.sourceforge.net/ 00:05:53.025 00:05:53.025 00:05:53.025 Suite: nvme 00:05:53.025 Test: test_create_ctrlr ...passed 00:05:53.025 Test: test_reset_ctrlr ...[2024-06-09 20:48:20.989332] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:53.025 passed 00:05:53.025 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:53.025 Test: test_failover_ctrlr ...passed 00:05:53.025 Test: test_race_between_failover_and_add_secondary_trid ...[2024-06-09 20:48:20.992086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:20.992327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:20.992559] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_pending_reset ...[2024-06-09 20:48:20.994200] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:20.994501] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_attach_ctrlr ...[2024-06-09 20:48:20.995648] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:53.025 passed 00:05:53.025 Test: test_aer_cb ...passed 00:05:53.025 Test: test_submit_nvme_cmd ...passed 00:05:53.025 Test: test_add_remove_trid ...passed 00:05:53.025 Test: test_abort ...[2024-06-09 20:48:20.998951] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:53.025 passed 00:05:53.025 Test: test_get_io_qpair ...passed 00:05:53.025 Test: test_bdev_unregister ...passed 00:05:53.025 Test: test_compare_ns ...passed 00:05:53.025 Test: test_init_ana_log_page ...passed 00:05:53.025 Test: test_get_memory_domains ...passed 00:05:53.025 Test: test_reconnect_qpair ...[2024-06-09 20:48:21.001874] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_create_bdev_ctrlr ...[2024-06-09 20:48:21.002383] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:53.025 passed 00:05:53.025 Test: test_add_multi_ns_to_bdev ...[2024-06-09 20:48:21.003716] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:53.025 passed 00:05:53.025 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:53.025 Test: test_admin_path ...passed 00:05:53.025 Test: test_reset_bdev_ctrlr ...passed 00:05:53.025 Test: test_find_io_path ...passed 00:05:53.025 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:53.025 Test: test_retry_io_for_io_path_error ...passed 00:05:53.025 Test: test_retry_io_count ...passed 00:05:53.025 Test: test_concurrent_read_ana_log_page ...passed 00:05:53.025 Test: test_retry_io_for_ana_error ...passed 00:05:53.025 Test: test_check_io_error_resiliency_params ...[2024-06-09 20:48:21.011203] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:53.025 [2024-06-09 20:48:21.011280] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:53.025 [2024-06-09 20:48:21.011308] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:53.025 [2024-06-09 20:48:21.011355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:53.025 [2024-06-09 20:48:21.011387] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:53.025 [2024-06-09 20:48:21.011444] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:53.025 [2024-06-09 20:48:21.011479] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:53.025 [2024-06-09 20:48:21.011526] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:53.025 [2024-06-09 20:48:21.011567] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:53.025 passed 00:05:53.025 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:53.025 Test: test_reconnect_ctrlr ...[2024-06-09 20:48:21.012404] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.012548] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.012839] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.013006] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.013189] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_retry_failover_ctrlr ...[2024-06-09 20:48:21.013611] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_fail_path ...[2024-06-09 20:48:21.014288] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.014455] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:53.025 [2024-06-09 20:48:21.014586] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.014699] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_nvme_ns_cmp ...passed[2024-06-09 20:48:21.014835] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 00:05:53.025 Test: test_ana_transition ...passed 00:05:53.025 Test: test_set_preferred_path ...passed 00:05:53.025 Test: test_find_next_io_path ...passed 00:05:53.025 Test: test_find_io_path_min_qd ...passed 00:05:53.025 Test: test_disable_auto_failback ...[2024-06-09 20:48:21.016575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_set_multipath_policy ...passed 00:05:53.025 Test: test_uuid_generation ...passed 00:05:53.025 Test: test_retry_io_to_same_path ...passed 00:05:53.025 Test: test_race_between_reset_and_disconnected ...passed 00:05:53.025 Test: test_ctrlr_op_rpc ...passed 00:05:53.025 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:53.025 Test: test_disable_enable_ctrlr ...[2024-06-09 20:48:21.020386] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 [2024-06-09 20:48:21.020575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:53.025 passed 00:05:53.025 Test: test_delete_ctrlr_done ...passed 00:05:53.025 Test: test_ns_remove_during_reset ...passed 00:05:53.025 00:05:53.025 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.025 suites 1 1 n/a 0 0 00:05:53.025 tests 48 48 48 0 0 00:05:53.025 asserts 3553 3553 3553 0 n/a 00:05:53.025 00:05:53.025 Elapsed time = 0.034 seconds 00:05:53.025 20:48:21 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:53.025 Test Options 00:05:53.025 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:53.025 00:05:53.025 00:05:53.025 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.025 http://cunit.sourceforge.net/ 00:05:53.025 00:05:53.025 00:05:53.025 Suite: raid 00:05:53.025 Test: test_create_raid ...passed 00:05:53.025 Test: test_create_raid_superblock ...passed 00:05:53.025 Test: test_delete_raid ...passed 00:05:53.025 Test: test_create_raid_invalid_args ...[2024-06-09 20:48:21.067441] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:53.025 [2024-06-09 20:48:21.067872] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:53.025 [2024-06-09 20:48:21.068373] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:53.025 [2024-06-09 20:48:21.068634] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:53.026 [2024-06-09 20:48:21.069421] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:53.026 passed 00:05:53.026 Test: test_delete_raid_invalid_args ...passed 00:05:53.026 Test: test_io_channel ...passed 00:05:53.026 Test: test_reset_io ...passed 00:05:53.026 Test: test_write_io ...passed 00:05:53.026 Test: test_read_io ...passed 00:05:53.969 Test: test_unmap_io ...passed 00:05:53.969 Test: test_io_failure ...[2024-06-09 20:48:21.865829] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:53.969 passed 00:05:53.969 Test: test_multi_raid_no_io ...passed 00:05:53.969 Test: test_multi_raid_with_io ...passed 00:05:53.969 Test: test_io_type_supported ...passed 00:05:53.969 Test: test_raid_json_dump_info ...passed 00:05:53.969 Test: test_context_size ...passed 00:05:53.969 Test: test_raid_level_conversions ...passed 00:05:53.969 Test: test_raid_process ...passed 00:05:53.969 Test: test_raid_io_split ...passed 00:05:53.969 00:05:53.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.969 suites 1 1 n/a 0 0 00:05:53.969 tests 19 19 19 0 0 00:05:53.969 asserts 177879 177879 177879 0 n/a 00:05:53.969 00:05:53.969 Elapsed time = 0.809 seconds 00:05:53.969 20:48:21 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:53.969 00:05:53.969 00:05:53.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.969 http://cunit.sourceforge.net/ 00:05:53.969 00:05:53.969 00:05:53.969 Suite: raid_sb 00:05:53.969 Test: test_raid_bdev_write_superblock ...passed 00:05:53.969 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:53.969 Test: test_raid_bdev_parse_superblock ...[2024-06-09 20:48:21.913979] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:53.969 passed 00:05:53.969 00:05:53.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.969 suites 1 1 n/a 0 0 00:05:53.969 tests 3 3 3 0 0 00:05:53.969 asserts 32 32 32 0 n/a 00:05:53.969 00:05:53.969 Elapsed time = 0.001 seconds 00:05:53.969 20:48:21 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:53.969 00:05:53.969 00:05:53.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.969 http://cunit.sourceforge.net/ 00:05:53.969 00:05:53.969 00:05:53.969 Suite: concat 00:05:53.969 Test: test_concat_start ...passed 00:05:53.969 Test: test_concat_rw ...passed 00:05:53.969 Test: test_concat_null_payload ...passed 00:05:53.969 00:05:53.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.969 suites 1 1 n/a 0 0 00:05:53.969 tests 3 3 3 0 0 00:05:53.969 asserts 8097 8097 8097 0 n/a 00:05:53.969 00:05:53.969 Elapsed time = 0.007 seconds 00:05:53.969 20:48:21 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:53.969 00:05:53.969 00:05:53.969 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.969 http://cunit.sourceforge.net/ 00:05:53.969 00:05:53.969 00:05:53.969 Suite: raid1 00:05:53.969 Test: test_raid1_start ...passed 00:05:53.969 Test: test_raid1_read_balancing ...passed 00:05:53.969 00:05:53.969 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.969 suites 1 1 n/a 0 0 00:05:53.969 tests 2 2 2 0 0 00:05:53.969 asserts 2856 2856 2856 0 
n/a 00:05:53.969 00:05:53.970 Elapsed time = 0.004 seconds 00:05:53.970 20:48:21 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:53.970 00:05:53.970 00:05:53.970 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.970 http://cunit.sourceforge.net/ 00:05:53.970 00:05:53.970 00:05:53.970 Suite: zone 00:05:53.970 Test: test_zone_get_operation ...passed 00:05:53.970 Test: test_bdev_zone_get_info ...passed 00:05:53.970 Test: test_bdev_zone_management ...passed 00:05:53.970 Test: test_bdev_zone_append ...passed 00:05:53.970 Test: test_bdev_zone_append_with_md ...passed 00:05:53.970 Test: test_bdev_zone_appendv ...passed 00:05:53.970 Test: test_bdev_zone_appendv_with_md ...passed 00:05:53.970 Test: test_bdev_io_get_append_location ...passed 00:05:53.970 00:05:53.970 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.970 suites 1 1 n/a 0 0 00:05:53.970 tests 8 8 8 0 0 00:05:53.970 asserts 94 94 94 0 n/a 00:05:53.970 00:05:53.970 Elapsed time = 0.000 seconds 00:05:53.970 20:48:22 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:53.970 00:05:53.970 00:05:53.970 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.970 http://cunit.sourceforge.net/ 00:05:53.970 00:05:53.970 00:05:53.970 Suite: gpt_parse 00:05:53.970 Test: test_parse_mbr_and_primary ...[2024-06-09 20:48:22.033036] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:53.970 [2024-06-09 20:48:22.033316] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:53.970 [2024-06-09 20:48:22.033371] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:53.970 [2024-06-09 20:48:22.033445] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:53.970 [2024-06-09 20:48:22.033541] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:53.970 [2024-06-09 20:48:22.033637] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:53.970 passed 00:05:53.970 Test: test_parse_secondary ...[2024-06-09 20:48:22.034609] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:53.970 [2024-06-09 20:48:22.034678] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:53.970 [2024-06-09 20:48:22.034724] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:53.970 [2024-06-09 20:48:22.034764] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:53.970 passed 00:05:53.970 Test: test_check_mbr ...[2024-06-09 20:48:22.035518] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:53.970 [2024-06-09 20:48:22.035578] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:53.970 passed 00:05:53.970 Test: test_read_header ...[2024-06-09 20:48:22.035640] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:53.970 [2024-06-09 20:48:22.035730] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:53.970 [2024-06-09 20:48:22.035807] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:53.970 passed 00:05:53.970 Test: test_read_partitions ...[2024-06-09 20:48:22.035853] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:53.970 [2024-06-09 20:48:22.035895] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:53.970 [2024-06-09 20:48:22.035934] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:53.970 [2024-06-09 20:48:22.035999] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:53.970 [2024-06-09 20:48:22.036051] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:53.970 [2024-06-09 20:48:22.036097] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:53.970 [2024-06-09 20:48:22.036132] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:53.970 [2024-06-09 20:48:22.036521] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:53.970 passed 00:05:53.970 00:05:53.970 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.970 suites 1 1 n/a 0 0 00:05:53.970 tests 5 5 5 0 0 00:05:53.970 asserts 33 33 33 0 n/a 00:05:53.970 00:05:53.970 Elapsed time = 0.004 seconds 00:05:53.970 20:48:22 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:53.970 00:05:53.970 00:05:53.970 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.970 http://cunit.sourceforge.net/ 00:05:53.970 00:05:53.970 00:05:53.970 Suite: bdev_part 00:05:53.970 Test: part_test ...[2024-06-09 20:48:22.068644] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:53.970 passed 00:05:53.970 Test: part_free_test ...passed 00:05:53.970 Test: part_get_io_channel_test ...passed 00:05:53.970 Test: part_construct_ext ...passed 00:05:53.970 00:05:53.970 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.970 suites 1 1 n/a 0 0 00:05:53.970 tests 4 4 4 0 0 00:05:53.970 asserts 48 48 48 0 n/a 00:05:53.970 00:05:53.970 Elapsed time = 0.050 seconds 00:05:53.970 20:48:22 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:54.229 00:05:54.229 00:05:54.229 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.229 http://cunit.sourceforge.net/ 00:05:54.229 00:05:54.229 00:05:54.229 Suite: scsi_nvme_suite 00:05:54.229 Test: scsi_nvme_translate_test ...passed 00:05:54.229 00:05:54.229 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.229 suites 1 1 n/a 0 0 00:05:54.229 tests 1 1 1 0 0 00:05:54.229 asserts 104 104 104 0 n/a 00:05:54.229 00:05:54.229 Elapsed time = 0.000 seconds 
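Each suite above is a standalone CUnit binary that the harness invokes one after another; when triaging a failure it can be rerun in isolation. A minimal sketch, assuming an SPDK tree built with unit tests at the paths this log itself shows (the binary and wrapper paths below are copied from the xtrace lines above, not verified beyond this log):

  # Rerun a single suite directly; it prints the same CUnit banner and Run Summary.
  /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut

  # Or repeat the whole unit-test pass via the wrapper script the harness drives:
  /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
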
00:05:54.229 20:48:22 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:54.229 00:05:54.229 00:05:54.229 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.229 http://cunit.sourceforge.net/ 00:05:54.229 00:05:54.229 00:05:54.229 Suite: lvol 00:05:54.229 Test: ut_lvs_init ...[2024-06-09 20:48:22.176481] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:54.229 [2024-06-09 20:48:22.176870] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:54.229 passed 00:05:54.229 Test: ut_lvol_init ...passed 00:05:54.229 Test: ut_lvol_snapshot ...passed 00:05:54.229 Test: ut_lvol_clone ...passed 00:05:54.229 Test: ut_lvs_destroy ...passed 00:05:54.229 Test: ut_lvs_unload ...passed 00:05:54.229 Test: ut_lvol_resize ...[2024-06-09 20:48:22.178003] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:54.229 passed 00:05:54.229 Test: ut_lvol_set_read_only ...passed 00:05:54.229 Test: ut_lvol_hotremove ...passed 00:05:54.229 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:54.229 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:54.229 Test: ut_lvol_read_write ...passed 00:05:54.229 Test: ut_vbdev_lvol_submit_request ...passed 00:05:54.229 Test: ut_lvol_examine_config ...passed 00:05:54.229 Test: ut_lvol_examine_disk ...[2024-06-09 20:48:22.178548] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:54.229 passed 00:05:54.229 Test: ut_lvol_rename ...[2024-06-09 20:48:22.179323] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:54.229 [2024-06-09 20:48:22.179413] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:54.229 passed 00:05:54.229 Test: ut_bdev_finish ...passed 00:05:54.229 Test: ut_lvs_rename ...passed 00:05:54.229 Test: ut_lvol_seek ...passed 00:05:54.229 Test: ut_esnap_dev_create ...passed 00:05:54.229 Test: ut_lvol_esnap_clone_bad_args ...[2024-06-09 20:48:22.179946] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:54.229 [2024-06-09 20:48:22.180007] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:54.229 [2024-06-09 20:48:22.180031] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:54.229 [2024-06-09 20:48:22.180075] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:54.229 [2024-06-09 20:48:22.180177] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:54.229 [2024-06-09 20:48:22.180205] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:54.229 passed 00:05:54.229 00:05:54.229 Run Summary: Type Total Ran Passed Failed 
Inactive 00:05:54.229 suites 1 1 n/a 0 0 00:05:54.229 tests 21 21 21 0 0 00:05:54.229 asserts 712 712 712 0 n/a 00:05:54.229 00:05:54.229 Elapsed time = 0.004 seconds 00:05:54.229 20:48:22 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:54.229 00:05:54.229 00:05:54.229 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.229 http://cunit.sourceforge.net/ 00:05:54.229 00:05:54.229 00:05:54.229 Suite: zone_block 00:05:54.229 Test: test_zone_block_create ...passed 00:05:54.229 Test: test_zone_block_create_invalid ...[2024-06-09 20:48:22.236329] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:54.229 [2024-06-09 20:48:22.236689] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-09 20:48:22.236906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:54.229 [2024-06-09 20:48:22.236985] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-09 20:48:22.237182] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:54.230 [2024-06-09 20:48:22.237251] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-09 20:48:22.237361] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:54.230 [2024-06-09 20:48:22.237444] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:54.230 Test: test_get_zone_info ...[2024-06-09 20:48:22.238116] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.238209] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.238287] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_supported_io_types ...passed 00:05:54.230 Test: test_reset_zone ...[2024-06-09 20:48:22.239183] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.239265] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_open_zone ...[2024-06-09 20:48:22.239776] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.240518] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:54.230 [2024-06-09 20:48:22.240609] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_zone_write ...[2024-06-09 20:48:22.241095] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:54.230 [2024-06-09 20:48:22.241174] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.241257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:54.230 [2024-06-09 20:48:22.241322] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.247126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:54.230 [2024-06-09 20:48:22.247189] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.247285] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:54.230 [2024-06-09 20:48:22.247323] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.252911] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:54.230 [2024-06-09 20:48:22.252983] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_zone_read ...[2024-06-09 20:48:22.253485] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:54.230 [2024-06-09 20:48:22.253578] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.253656] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:54.230 [2024-06-09 20:48:22.253693] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.254200] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:54.230 [2024-06-09 20:48:22.254270] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_close_zone ...[2024-06-09 20:48:22.254666] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:54.230 [2024-06-09 20:48:22.254785] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.255032] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.255106] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_finish_zone ...[2024-06-09 20:48:22.255838] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.255920] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 passed 00:05:54.230 Test: test_append_zone ...[2024-06-09 20:48:22.256392] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:54.230 [2024-06-09 20:48:22.256470] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.256536] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:54.230 [2024-06-09 20:48:22.256586] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:54.230 [2024-06-09 20:48:22.267805] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:54.230 [2024-06-09 20:48:22.267876] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:54.230 passed 00:05:54.230 00:05:54.230 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.230 suites 1 1 n/a 0 0 00:05:54.230 tests 11 11 11 0 0 00:05:54.230 asserts 3437 3437 3437 0 n/a 00:05:54.230 00:05:54.230 Elapsed time = 0.033 seconds 00:05:54.230 20:48:22 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:54.230 00:05:54.230 00:05:54.230 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.230 http://cunit.sourceforge.net/ 00:05:54.230 00:05:54.230 00:05:54.230 Suite: bdev 00:05:54.230 Test: basic ...[2024-06-09 20:48:22.367654] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5569f05f7401): Operation not permitted (rc=-1) 00:05:54.230 [2024-06-09 20:48:22.368018] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x5569f05f73c0): Operation not permitted (rc=-1) 00:05:54.230 [2024-06-09 20:48:22.368105] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5569f05f7401): Operation not permitted (rc=-1) 00:05:54.230 passed 00:05:54.488 Test: unregister_and_close ...passed 00:05:54.489 Test: unregister_and_close_different_threads ...passed 00:05:54.489 Test: basic_qos ...passed 00:05:54.489 Test: put_channel_during_reset ...passed 00:05:54.489 Test: aborted_reset ...passed 00:05:54.489 Test: aborted_reset_no_outstanding_io ...passed 00:05:54.747 Test: io_during_reset ...passed 00:05:54.747 Test: reset_completions ...passed 00:05:54.747 Test: io_during_qos_queue ...passed 00:05:54.747 Test: io_during_qos_reset ...passed 00:05:54.747 Test: enomem ...passed 00:05:54.747 Test: enomem_multi_bdev ...passed 00:05:54.747 Test: enomem_multi_bdev_unregister ...passed 00:05:55.005 Test: enomem_multi_io_target ...passed 00:05:55.005 Test: qos_dynamic_enable ...passed 00:05:55.005 Test: bdev_histograms_mt ...passed 00:05:55.005 Test: bdev_set_io_timeout_mt ...[2024-06-09 20:48:23.037488] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:55.005 passed 00:05:55.005 Test: lock_lba_range_then_submit_io ...[2024-06-09 20:48:23.053168] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x5569f05f7380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:55.005 passed 00:05:55.005 Test: unregister_during_reset ...passed 00:05:55.005 Test: event_notify_and_close ...passed 00:05:55.005 Test: unregister_and_qos_poller ...passed 00:05:55.005 Suite: bdev_wrong_thread 00:05:55.005 Test: spdk_bdev_register_wt ...[2024-06-09 20:48:23.174162] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:55.005 passed 00:05:55.005 Test: spdk_bdev_examine_wt ...[2024-06-09 20:48:23.174512] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:55.005 passed 00:05:55.005 00:05:55.005 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.005 suites 2 2 n/a 0 0 00:05:55.005 tests 24 24 24 0 0 00:05:55.005 asserts 621 621 621 0 n/a 00:05:55.005 00:05:55.005 Elapsed time = 0.836 seconds 00:05:55.264 00:05:55.264 real 0m3.734s 00:05:55.264 user 0m1.627s 00:05:55.264 sys 0m2.101s 00:05:55.264 20:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.264 20:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.264 
************************************ 00:05:55.264 END TEST unittest_bdev 00:05:55.264 ************************************ 00:05:55.264 20:48:23 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:55.264 20:48:23 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:55.264 20:48:23 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:55.264 20:48:23 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:55.264 20:48:23 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:05:55.264 20:48:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.264 20:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.264 20:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.264 ************************************ 00:05:55.264 START TEST unittest_blob_blobfs 00:05:55.264 ************************************ 00:05:55.264 20:48:23 -- common/autotest_common.sh@1104 -- # unittest_blob 00:05:55.264 20:48:23 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:55.264 20:48:23 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:55.264 00:05:55.264 00:05:55.264 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.264 http://cunit.sourceforge.net/ 00:05:55.264 00:05:55.264 00:05:55.264 Suite: blob_nocopy_noextent 00:05:55.264 Test: blob_init ...[2024-06-09 20:48:23.293813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:55.265 passed 00:05:55.265 Test: blob_thin_provision ...passed 00:05:55.265 Test: blob_read_only ...passed 00:05:55.265 Test: bs_load ...[2024-06-09 20:48:23.393150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:55.265 passed 00:05:55.265 Test: bs_load_custom_cluster_size ...passed 00:05:55.265 Test: bs_load_after_failed_grow ...passed 00:05:55.265 Test: bs_cluster_sz ...[2024-06-09 20:48:23.428646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:55.265 [2024-06-09 20:48:23.429471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:55.265 [2024-06-09 20:48:23.430010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:55.523 passed 00:05:55.523 Test: bs_resize_md ...passed 00:05:55.523 Test: bs_destroy ...passed 00:05:55.523 Test: bs_type ...passed 00:05:55.523 Test: bs_super_block ...passed 00:05:55.523 Test: bs_test_recover_cluster_count ...passed 00:05:55.523 Test: bs_grow_live ...passed 00:05:55.523 Test: bs_grow_live_no_space ...passed 00:05:55.523 Test: bs_test_grow ...passed 00:05:55.523 Test: blob_serialize_test ...passed 00:05:55.523 Test: super_block_crc ...passed 00:05:55.523 Test: blob_thin_prov_write_count_io ...passed 00:05:55.523 Test: bs_load_iter_test ...passed 00:05:55.523 Test: blob_relations ...[2024-06-09 20:48:23.605246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.523 [2024-06-09 20:48:23.605593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.523 [2024-06-09 20:48:23.606654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.523 [2024-06-09 20:48:23.606877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.523 passed 00:05:55.523 Test: blob_relations2 ...[2024-06-09 20:48:23.621135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.523 [2024-06-09 20:48:23.621420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.523 [2024-06-09 20:48:23.621543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.523 [2024-06-09 20:48:23.621801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.523 [2024-06-09 20:48:23.623405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.524 [2024-06-09 20:48:23.623612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.524 [2024-06-09 20:48:23.624208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.524 [2024-06-09 20:48:23.624410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.524 passed 00:05:55.524 Test: blob_relations3 ...passed 00:05:55.781 Test: blobstore_clean_power_failure ...passed 00:05:55.781 Test: blob_delete_snapshot_power_failure ...[2024-06-09 20:48:23.768441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:55.781 [2024-06-09 20:48:23.780252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:55.781 [2024-06-09 20:48:23.780582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:55.782 [2024-06-09 20:48:23.780668] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.782 [2024-06-09 20:48:23.792250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:55.782 [2024-06-09 20:48:23.792562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:55.782 [2024-06-09 20:48:23.792670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:55.782 [2024-06-09 20:48:23.792953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.782 [2024-06-09 20:48:23.804741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:55.782 [2024-06-09 20:48:23.805083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.782 [2024-06-09 20:48:23.816706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:55.782 [2024-06-09 20:48:23.817048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.782 [2024-06-09 20:48:23.828856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:55.782 [2024-06-09 20:48:23.829193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.782 passed 00:05:55.782 Test: blob_create_snapshot_power_failure ...[2024-06-09 20:48:23.867649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:55.782 [2024-06-09 20:48:23.891970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:55.782 [2024-06-09 20:48:23.904452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:55.782 passed 00:05:55.782 Test: blob_io_unit ...passed 00:05:56.040 Test: blob_io_unit_compatibility ...passed 00:05:56.040 Test: blob_ext_md_pages ...passed 00:05:56.040 Test: blob_esnap_io_4096_4096 ...passed 00:05:56.040 Test: blob_esnap_io_512_512 ...passed 00:05:56.040 Test: blob_esnap_io_4096_512 ...passed 00:05:56.040 Test: blob_esnap_io_512_4096 ...passed 00:05:56.040 Suite: blob_bs_nocopy_noextent 00:05:56.040 Test: blob_open ...passed 00:05:56.040 Test: blob_create ...[2024-06-09 20:48:24.144196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:56.040 passed 00:05:56.298 Test: blob_create_loop ...passed 00:05:56.298 Test: blob_create_fail ...[2024-06-09 20:48:24.249405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:56.298 passed 00:05:56.298 Test: blob_create_internal ...passed 00:05:56.298 Test: blob_create_zero_extent ...passed 00:05:56.298 Test: blob_snapshot ...passed 00:05:56.299 Test: blob_clone ...passed 00:05:56.299 Test: blob_inflate ...[2024-06-09 20:48:24.429955] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:56.299 passed 00:05:56.557 Test: blob_delete ...passed 00:05:56.557 Test: blob_resize_test ...[2024-06-09 20:48:24.496185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:56.557 passed 00:05:56.557 Test: channel_ops ...passed 00:05:56.557 Test: blob_super ...passed 00:05:56.557 Test: blob_rw_verify_iov ...passed 00:05:56.557 Test: blob_unmap ...passed 00:05:56.557 Test: blob_iter ...passed 00:05:56.557 Test: blob_parse_md ...passed 00:05:56.815 Test: bs_load_pending_removal ...passed 00:05:56.815 Test: bs_unload ...[2024-06-09 20:48:24.753633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:56.815 passed 00:05:56.815 Test: bs_usable_clusters ...passed 00:05:56.815 Test: blob_crc ...[2024-06-09 20:48:24.817696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:56.815 [2024-06-09 20:48:24.818159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:56.815 passed 00:05:56.815 Test: blob_flags ...passed 00:05:56.815 Test: bs_version ...passed 00:05:56.815 Test: blob_set_xattrs_test ...[2024-06-09 20:48:24.923085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:56.815 [2024-06-09 20:48:24.923468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:56.815 passed 00:05:57.074 Test: blob_thin_prov_alloc ...passed 00:05:57.074 Test: blob_insert_cluster_msg_test ...passed 00:05:57.074 Test: blob_thin_prov_rw ...passed 00:05:57.074 Test: blob_thin_prov_rle ...passed 00:05:57.074 Test: blob_thin_prov_rw_iov ...passed 00:05:57.074 Test: blob_snapshot_rw ...passed 00:05:57.332 Test: blob_snapshot_rw_iov ...passed 00:05:57.332 Test: blob_inflate_rw ...passed 00:05:57.591 Test: blob_snapshot_freeze_io ...passed 00:05:57.591 Test: blob_operation_split_rw ...passed 00:05:57.849 Test: blob_operation_split_rw_iov ...passed 00:05:57.849 Test: blob_simultaneous_operations ...[2024-06-09 20:48:25.838405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:57.849 [2024-06-09 20:48:25.838778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:57.849 [2024-06-09 20:48:25.840087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:57.849 [2024-06-09 20:48:25.840276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:57.849 [2024-06-09 20:48:25.851955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:57.849 [2024-06-09 20:48:25.852191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:57.849 [2024-06-09 20:48:25.852403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:05:57.849 [2024-06-09 20:48:25.852612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:57.849 passed 00:05:57.849 Test: blob_persist_test ...passed 00:05:57.849 Test: blob_decouple_snapshot ...passed 00:05:57.849 Test: blob_seek_io_unit ...passed 00:05:58.108 Test: blob_nested_freezes ...passed 00:05:58.108 Suite: blob_blob_nocopy_noextent 00:05:58.108 Test: blob_write ...passed 00:05:58.108 Test: blob_read ...passed 00:05:58.108 Test: blob_rw_verify ...passed 00:05:58.108 Test: blob_rw_verify_iov_nomem ...passed 00:05:58.108 Test: blob_rw_iov_read_only ...passed 00:05:58.108 Test: blob_xattr ...passed 00:05:58.108 Test: blob_dirty_shutdown ...passed 00:05:58.367 Test: blob_is_degraded ...passed 00:05:58.367 Suite: blob_esnap_bs_nocopy_noextent 00:05:58.367 Test: blob_esnap_create ...passed 00:05:58.367 Test: blob_esnap_thread_add_remove ...passed 00:05:58.367 Test: blob_esnap_clone_snapshot ...passed 00:05:58.367 Test: blob_esnap_clone_inflate ...passed 00:05:58.367 Test: blob_esnap_clone_decouple ...passed 00:05:58.367 Test: blob_esnap_clone_reload ...passed 00:05:58.367 Test: blob_esnap_hotplug ...passed 00:05:58.367 Suite: blob_nocopy_extent 00:05:58.367 Test: blob_init ...[2024-06-09 20:48:26.524816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:58.367 passed 00:05:58.626 Test: blob_thin_provision ...passed 00:05:58.626 Test: blob_read_only ...passed 00:05:58.626 Test: bs_load ...[2024-06-09 20:48:26.568098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:58.626 passed 00:05:58.626 Test: bs_load_custom_cluster_size ...passed 00:05:58.626 Test: bs_load_after_failed_grow ...passed 00:05:58.626 Test: bs_cluster_sz ...[2024-06-09 20:48:26.593210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:58.626 [2024-06-09 20:48:26.593586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
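The two bs_cluster_sz rejections just above, together with the "Cluster size 4095 is smaller than page size 4096" check that completes the test below, are deliberate negative cases: the suite hands spdk_bs_init() invalid cluster sizes and expects option verification to fail. A minimal sketch of how such a case is driven, assuming the callback-style blobstore API (the helper names are illustrative, and the exact spdk_bs_opts_init() signature varies across SPDK releases):

#include "spdk/blob.h"

/*
 * Illustrative sketch only, not the test source. Completion callback:
 * bserrno comes back negative (e.g. -EINVAL) for the invalid cluster
 * sizes exercised above.
 */
static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	int *rc = cb_arg;

	(void)bs;
	*rc = bserrno;
}

/*
 * Hypothetical helper: ask for a blobstore with a given cluster size.
 * cluster_sz == 0 fails option verification; 4095 fails because it is
 * smaller than the 4096-byte metadata page; 4096 and larger succeed.
 */
static void
bs_init_with_cluster_sz(struct spdk_bs_dev *dev, uint32_t cluster_sz, int *rc)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts);
	opts.cluster_sz = cluster_sz;
	spdk_bs_init(dev, &opts, bs_init_done, rc);
}
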
00:05:58.626 [2024-06-09 20:48:26.593789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:58.626 passed 00:05:58.626 Test: bs_resize_md ...passed 00:05:58.626 Test: bs_destroy ...passed 00:05:58.626 Test: bs_type ...passed 00:05:58.626 Test: bs_super_block ...passed 00:05:58.626 Test: bs_test_recover_cluster_count ...passed 00:05:58.626 Test: bs_grow_live ...passed 00:05:58.626 Test: bs_grow_live_no_space ...passed 00:05:58.626 Test: bs_test_grow ...passed 00:05:58.626 Test: blob_serialize_test ...passed 00:05:58.626 Test: super_block_crc ...passed 00:05:58.626 Test: blob_thin_prov_write_count_io ...passed 00:05:58.626 Test: bs_load_iter_test ...passed 00:05:58.626 Test: blob_relations ...[2024-06-09 20:48:26.765064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.765411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 [2024-06-09 20:48:26.766872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.767112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 passed 00:05:58.626 Test: blob_relations2 ...[2024-06-09 20:48:26.782291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.782550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 [2024-06-09 20:48:26.782632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.782788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 [2024-06-09 20:48:26.784358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.784546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 [2024-06-09 20:48:26.785037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:58.626 [2024-06-09 20:48:26.785202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.626 passed 00:05:58.885 Test: blob_relations3 ...passed 00:05:58.885 Test: blobstore_clean_power_failure ...passed 00:05:58.885 Test: blob_delete_snapshot_power_failure ...[2024-06-09 20:48:26.937460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:58.885 [2024-06-09 20:48:26.951282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:58.885 [2024-06-09 20:48:26.963917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:58.885 [2024-06-09 20:48:26.964227] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:58.885 [2024-06-09 20:48:26.964297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.885 [2024-06-09 20:48:26.977032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:58.885 [2024-06-09 20:48:26.977363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:58.885 [2024-06-09 20:48:26.977443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:58.885 [2024-06-09 20:48:26.977637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.885 [2024-06-09 20:48:26.992816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:58.885 [2024-06-09 20:48:26.993176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:58.885 [2024-06-09 20:48:26.993277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:58.885 [2024-06-09 20:48:26.993650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.885 [2024-06-09 20:48:27.008851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:58.885 [2024-06-09 20:48:27.009182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.885 [2024-06-09 20:48:27.024499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:58.885 [2024-06-09 20:48:27.024873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.885 [2024-06-09 20:48:27.038731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:58.885 [2024-06-09 20:48:27.039026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:59.143 passed 00:05:59.144 Test: blob_create_snapshot_power_failure ...[2024-06-09 20:48:27.074262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:59.144 [2024-06-09 20:48:27.085800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:59.144 [2024-06-09 20:48:27.107803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:59.144 [2024-06-09 20:48:27.122135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:59.144 passed 00:05:59.144 Test: blob_io_unit ...passed 00:05:59.144 Test: blob_io_unit_compatibility ...passed 00:05:59.144 Test: blob_ext_md_pages ...passed 00:05:59.144 Test: blob_esnap_io_4096_4096 ...passed 00:05:59.144 Test: blob_esnap_io_512_512 ...passed 00:05:59.144 Test: blob_esnap_io_4096_512 ...passed 00:05:59.144 Test: 
blob_esnap_io_512_4096 ...passed 00:05:59.144 Suite: blob_bs_nocopy_extent 00:05:59.402 Test: blob_open ...passed 00:05:59.402 Test: blob_create ...[2024-06-09 20:48:27.354228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:59.402 passed 00:05:59.403 Test: blob_create_loop ...passed 00:05:59.403 Test: blob_create_fail ...[2024-06-09 20:48:27.464424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:59.403 passed 00:05:59.403 Test: blob_create_internal ...passed 00:05:59.403 Test: blob_create_zero_extent ...passed 00:05:59.403 Test: blob_snapshot ...passed 00:05:59.660 Test: blob_clone ...passed 00:05:59.660 Test: blob_inflate ...[2024-06-09 20:48:27.635883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:59.660 passed 00:05:59.660 Test: blob_delete ...passed 00:05:59.661 Test: blob_resize_test ...[2024-06-09 20:48:27.703316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:59.661 passed 00:05:59.661 Test: channel_ops ...passed 00:05:59.661 Test: blob_super ...passed 00:05:59.661 Test: blob_rw_verify_iov ...passed 00:05:59.918 Test: blob_unmap ...passed 00:05:59.918 Test: blob_iter ...passed 00:05:59.918 Test: blob_parse_md ...passed 00:05:59.918 Test: bs_load_pending_removal ...passed 00:05:59.918 Test: bs_unload ...[2024-06-09 20:48:27.954210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:59.918 passed 00:05:59.918 Test: bs_usable_clusters ...passed 00:05:59.918 Test: blob_crc ...[2024-06-09 20:48:28.029044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:59.918 [2024-06-09 20:48:28.029434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:59.918 passed 00:05:59.918 Test: blob_flags ...passed 00:06:00.176 Test: bs_version ...passed 00:06:00.176 Test: blob_set_xattrs_test ...[2024-06-09 20:48:28.141190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:00.176 [2024-06-09 20:48:28.141570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:00.176 passed 00:06:00.176 Test: blob_thin_prov_alloc ...passed 00:06:00.176 Test: blob_insert_cluster_msg_test ...passed 00:06:00.176 Test: blob_thin_prov_rw ...passed 00:06:00.434 Test: blob_thin_prov_rle ...passed 00:06:00.434 Test: blob_thin_prov_rw_iov ...passed 00:06:00.434 Test: blob_snapshot_rw ...passed 00:06:00.434 Test: blob_snapshot_rw_iov ...passed 00:06:00.692 Test: blob_inflate_rw ...passed 00:06:00.692 Test: blob_snapshot_freeze_io ...passed 00:06:00.692 Test: blob_operation_split_rw ...passed 00:06:00.951 Test: blob_operation_split_rw_iov ...passed 00:06:00.951 Test: blob_simultaneous_operations ...[2024-06-09 20:48:29.010174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:00.951 [2024-06-09 
20:48:29.010541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.951 [2024-06-09 20:48:29.011617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:00.951 [2024-06-09 20:48:29.011834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.951 [2024-06-09 20:48:29.021345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:00.951 [2024-06-09 20:48:29.021574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.951 [2024-06-09 20:48:29.021726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:00.951 [2024-06-09 20:48:29.021923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.951 passed 00:06:00.951 Test: blob_persist_test ...passed 00:06:01.209 Test: blob_decouple_snapshot ...passed 00:06:01.209 Test: blob_seek_io_unit ...passed 00:06:01.209 Test: blob_nested_freezes ...passed 00:06:01.209 Suite: blob_blob_nocopy_extent 00:06:01.209 Test: blob_write ...passed 00:06:01.209 Test: blob_read ...passed 00:06:01.209 Test: blob_rw_verify ...passed 00:06:01.209 Test: blob_rw_verify_iov_nomem ...passed 00:06:01.209 Test: blob_rw_iov_read_only ...passed 00:06:01.467 Test: blob_xattr ...passed 00:06:01.467 Test: blob_dirty_shutdown ...passed 00:06:01.467 Test: blob_is_degraded ...passed 00:06:01.467 Suite: blob_esnap_bs_nocopy_extent 00:06:01.467 Test: blob_esnap_create ...passed 00:06:01.467 Test: blob_esnap_thread_add_remove ...passed 00:06:01.467 Test: blob_esnap_clone_snapshot ...passed 00:06:01.467 Test: blob_esnap_clone_inflate ...passed 00:06:01.467 Test: blob_esnap_clone_decouple ...passed 00:06:01.726 Test: blob_esnap_clone_reload ...passed 00:06:01.726 Test: blob_esnap_hotplug ...passed 00:06:01.726 Suite: blob_copy_noextent 00:06:01.726 Test: blob_init ...[2024-06-09 20:48:29.702179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:01.726 passed 00:06:01.726 Test: blob_thin_provision ...passed 00:06:01.726 Test: blob_read_only ...passed 00:06:01.726 Test: bs_load ...[2024-06-09 20:48:29.745273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:01.726 passed 00:06:01.726 Test: bs_load_custom_cluster_size ...passed 00:06:01.726 Test: bs_load_after_failed_grow ...passed 00:06:01.726 Test: bs_cluster_sz ...[2024-06-09 20:48:29.768647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:01.726 [2024-06-09 20:48:29.768889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
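The copy_noextent suite opens the same way as its siblings: blob_init above deliberately registers a backing device with a 500-byte block length, which spdk_bs_init() rejects before the cluster-size checks that finish below. The constraint is geometric: blobstore metadata pages are 4096 bytes, so the spdk_bs_dev block length must divide 4096 evenly (512 and 4096 are the usual values). A hedged approximation of that check, not the literal blobstore source:

#include <stdbool.h>
#include <stdint.h>

#define BS_PAGE_SIZE 4096 /* blobstore page size assumed by this sketch */

/*
 * Approximation of the validation behind "unsupported dev block length
 * of 500": the page size must be an exact multiple of the block length.
 */
static bool
bs_dev_blocklen_supported(uint32_t blocklen)
{
	return blocklen != 0 && blocklen <= BS_PAGE_SIZE &&
	       BS_PAGE_SIZE % blocklen == 0;
}

/* 512 -> true, 4096 -> true, 500 -> false (4096 % 500 != 0) */
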
00:06:01.726 [2024-06-09 20:48:29.769035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:01.726 passed 00:06:01.726 Test: bs_resize_md ...passed 00:06:01.726 Test: bs_destroy ...passed 00:06:01.726 Test: bs_type ...passed 00:06:01.726 Test: bs_super_block ...passed 00:06:01.726 Test: bs_test_recover_cluster_count ...passed 00:06:01.726 Test: bs_grow_live ...passed 00:06:01.726 Test: bs_grow_live_no_space ...passed 00:06:01.726 Test: bs_test_grow ...passed 00:06:01.726 Test: blob_serialize_test ...passed 00:06:01.726 Test: super_block_crc ...passed 00:06:01.726 Test: blob_thin_prov_write_count_io ...passed 00:06:01.726 Test: bs_load_iter_test ...passed 00:06:01.988 Test: blob_relations ...[2024-06-09 20:48:29.905224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.905502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:29.906176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.906330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 passed 00:06:01.988 Test: blob_relations2 ...[2024-06-09 20:48:29.918351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.918586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:29.918659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.918764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:29.919706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.919877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:29.920196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:01.988 [2024-06-09 20:48:29.920329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 passed 00:06:01.988 Test: blob_relations3 ...passed 00:06:01.988 Test: blobstore_clean_power_failure ...passed 00:06:01.988 Test: blob_delete_snapshot_power_failure ...[2024-06-09 20:48:30.056284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:01.988 [2024-06-09 20:48:30.066974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:01.988 [2024-06-09 20:48:30.067277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:01.988 [2024-06-09 20:48:30.067346] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:30.077943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:01.988 [2024-06-09 20:48:30.078210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:01.988 [2024-06-09 20:48:30.078275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:01.988 [2024-06-09 20:48:30.078385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:30.089068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:01.988 [2024-06-09 20:48:30.089359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:30.099994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:01.988 [2024-06-09 20:48:30.100293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 [2024-06-09 20:48:30.111093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:01.988 [2024-06-09 20:48:30.111361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:01.988 passed 00:06:01.988 Test: blob_create_snapshot_power_failure ...[2024-06-09 20:48:30.142972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:02.254 [2024-06-09 20:48:30.165213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:02.254 [2024-06-09 20:48:30.178895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:02.254 passed 00:06:02.254 Test: blob_io_unit ...passed 00:06:02.254 Test: blob_io_unit_compatibility ...passed 00:06:02.254 Test: blob_ext_md_pages ...passed 00:06:02.254 Test: blob_esnap_io_4096_4096 ...passed 00:06:02.254 Test: blob_esnap_io_512_512 ...passed 00:06:02.254 Test: blob_esnap_io_4096_512 ...passed 00:06:02.254 Test: blob_esnap_io_512_4096 ...passed 00:06:02.254 Suite: blob_bs_copy_noextent 00:06:02.254 Test: blob_open ...passed 00:06:02.254 Test: blob_create ...[2024-06-09 20:48:30.402821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:02.254 passed 00:06:02.513 Test: blob_create_loop ...passed 00:06:02.513 Test: blob_create_fail ...[2024-06-09 20:48:30.487384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:02.513 passed 00:06:02.513 Test: blob_create_internal ...passed 00:06:02.513 Test: blob_create_zero_extent ...passed 00:06:02.513 Test: blob_snapshot ...passed 00:06:02.513 Test: blob_clone ...passed 00:06:02.513 Test: blob_inflate ...[2024-06-09 20:48:30.644070] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:02.513 passed 00:06:02.513 Test: blob_delete ...passed 00:06:02.771 Test: blob_resize_test ...[2024-06-09 20:48:30.703128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:02.771 passed 00:06:02.771 Test: channel_ops ...passed 00:06:02.771 Test: blob_super ...passed 00:06:02.771 Test: blob_rw_verify_iov ...passed 00:06:02.771 Test: blob_unmap ...passed 00:06:02.771 Test: blob_iter ...passed 00:06:02.771 Test: blob_parse_md ...passed 00:06:02.771 Test: bs_load_pending_removal ...passed 00:06:03.029 Test: bs_unload ...[2024-06-09 20:48:30.960261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:03.029 passed 00:06:03.029 Test: bs_usable_clusters ...passed 00:06:03.029 Test: blob_crc ...[2024-06-09 20:48:31.019148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:03.029 [2024-06-09 20:48:31.019519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:03.029 passed 00:06:03.029 Test: blob_flags ...passed 00:06:03.029 Test: bs_version ...passed 00:06:03.030 Test: blob_set_xattrs_test ...[2024-06-09 20:48:31.109417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.030 [2024-06-09 20:48:31.109791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.030 passed 00:06:03.288 Test: blob_thin_prov_alloc ...passed 00:06:03.288 Test: blob_insert_cluster_msg_test ...passed 00:06:03.288 Test: blob_thin_prov_rw ...passed 00:06:03.288 Test: blob_thin_prov_rle ...passed 00:06:03.288 Test: blob_thin_prov_rw_iov ...passed 00:06:03.288 Test: blob_snapshot_rw ...passed 00:06:03.288 Test: blob_snapshot_rw_iov ...passed 00:06:03.546 Test: blob_inflate_rw ...passed 00:06:03.546 Test: blob_snapshot_freeze_io ...passed 00:06:03.804 Test: blob_operation_split_rw ...passed 00:06:03.804 Test: blob_operation_split_rw_iov ...passed 00:06:04.069 Test: blob_simultaneous_operations ...[2024-06-09 20:48:31.998240] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:04.069 [2024-06-09 20:48:31.998560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.069 [2024-06-09 20:48:31.999076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:04.069 [2024-06-09 20:48:31.999279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.069 [2024-06-09 20:48:32.001998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:04.069 [2024-06-09 20:48:32.002253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.069 [2024-06-09 20:48:32.002389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:04.069 [2024-06-09 20:48:32.002606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.069 passed 00:06:04.069 Test: blob_persist_test ...passed 00:06:04.069 Test: blob_decouple_snapshot ...passed 00:06:04.069 Test: blob_seek_io_unit ...passed 00:06:04.069 Test: blob_nested_freezes ...passed 00:06:04.069 Suite: blob_blob_copy_noextent 00:06:04.069 Test: blob_write ...passed 00:06:04.069 Test: blob_read ...passed 00:06:04.330 Test: blob_rw_verify ...passed 00:06:04.330 Test: blob_rw_verify_iov_nomem ...passed 00:06:04.330 Test: blob_rw_iov_read_only ...passed 00:06:04.330 Test: blob_xattr ...passed 00:06:04.330 Test: blob_dirty_shutdown ...passed 00:06:04.330 Test: blob_is_degraded ...passed 00:06:04.330 Suite: blob_esnap_bs_copy_noextent 00:06:04.330 Test: blob_esnap_create ...passed 00:06:04.589 Test: blob_esnap_thread_add_remove ...passed 00:06:04.589 Test: blob_esnap_clone_snapshot ...passed 00:06:04.589 Test: blob_esnap_clone_inflate ...passed 00:06:04.589 Test: blob_esnap_clone_decouple ...passed 00:06:04.589 Test: blob_esnap_clone_reload ...passed 00:06:04.589 Test: blob_esnap_hotplug ...passed 00:06:04.589 Suite: blob_copy_extent 00:06:04.589 Test: blob_init ...[2024-06-09 20:48:32.707883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:04.589 passed 00:06:04.589 Test: blob_thin_provision ...passed 00:06:04.589 Test: blob_read_only ...passed 00:06:04.589 Test: bs_load ...[2024-06-09 20:48:32.759981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:04.589 passed 00:06:04.847 Test: bs_load_custom_cluster_size ...passed 00:06:04.847 Test: bs_load_after_failed_grow ...passed 00:06:04.847 Test: bs_cluster_sz ...[2024-06-09 20:48:32.788754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:04.847 [2024-06-09 20:48:32.789005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
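The copy_extent flavor repeats the option checks, and its cluster-size case completes just below. The other error pair that recurs all through this log, "Cannot remove snapshot ..." followed by "Failed to remove blob", is equally intentional: blobstore refuses to delete a snapshot while it is still open or while more than one clone depends on it, and the relation tests assert exactly that. A sketch of the scenario under test, with the asynchronous callback chaining collapsed for brevity (all names here are illustrative):

#include "spdk/blob.h"

/* Stub completions; real code issues the next call from inside these. */
static void
op_done(void *cb_arg, int bserrno)
{
	(void)cb_arg; (void)bserrno;
}

static void
id_done(void *cb_arg, spdk_blob_id id, int bserrno)
{
	(void)cb_arg; (void)id; (void)bserrno;
}

/*
 * Hypothetical scenario matching the errors above: a snapshot that is
 * still open, or that has more than one clone, cannot be deleted, so
 * spdk_bs_delete_blob() completes with an error and the test expects it.
 */
static void
snapshot_delete_scenario(struct spdk_blob_store *bs, spdk_blob_id blobid,
			 spdk_blob_id snapid)
{
	spdk_bs_create_snapshot(bs, blobid, NULL, id_done, NULL);
	spdk_bs_create_clone(bs, snapid, NULL, id_done, NULL);
	spdk_bs_create_clone(bs, snapid, NULL, id_done, NULL);
	spdk_bs_delete_blob(bs, snapid, op_done, NULL); /* rejected */
}
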
00:06:04.847 [2024-06-09 20:48:32.789182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:04.847 passed 00:06:04.847 Test: bs_resize_md ...passed 00:06:04.847 Test: bs_destroy ...passed 00:06:04.847 Test: bs_type ...passed 00:06:04.847 Test: bs_super_block ...passed 00:06:04.847 Test: bs_test_recover_cluster_count ...passed 00:06:04.847 Test: bs_grow_live ...passed 00:06:04.847 Test: bs_grow_live_no_space ...passed 00:06:04.847 Test: bs_test_grow ...passed 00:06:04.847 Test: blob_serialize_test ...passed 00:06:04.847 Test: super_block_crc ...passed 00:06:04.848 Test: blob_thin_prov_write_count_io ...passed 00:06:04.848 Test: bs_load_iter_test ...passed 00:06:04.848 Test: blob_relations ...[2024-06-09 20:48:32.951163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.951544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 [2024-06-09 20:48:32.952515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.952726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 passed 00:06:04.848 Test: blob_relations2 ...[2024-06-09 20:48:32.967855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.968213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 [2024-06-09 20:48:32.968301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.968548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 [2024-06-09 20:48:32.970102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.970316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 [2024-06-09 20:48:32.970772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:04.848 [2024-06-09 20:48:32.970963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:04.848 passed 00:06:04.848 Test: blob_relations3 ...passed 00:06:05.106 Test: blobstore_clean_power_failure ...passed 00:06:05.106 Test: blob_delete_snapshot_power_failure ...[2024-06-09 20:48:33.139468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.106 [2024-06-09 20:48:33.153780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.106 [2024-06-09 20:48:33.168216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:05.106 [2024-06-09 20:48:33.168602] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.106 [2024-06-09 20:48:33.168673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.106 [2024-06-09 20:48:33.185288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.106 [2024-06-09 20:48:33.185689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:05.106 [2024-06-09 20:48:33.185775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.107 [2024-06-09 20:48:33.185922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.107 [2024-06-09 20:48:33.199008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.107 [2024-06-09 20:48:33.199358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:05.107 [2024-06-09 20:48:33.199419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.107 [2024-06-09 20:48:33.199540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.107 [2024-06-09 20:48:33.212695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:05.107 [2024-06-09 20:48:33.213073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.107 [2024-06-09 20:48:33.226488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:05.107 [2024-06-09 20:48:33.226850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.107 [2024-06-09 20:48:33.239920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:05.107 [2024-06-09 20:48:33.240267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.107 passed 00:06:05.107 Test: blob_create_snapshot_power_failure ...[2024-06-09 20:48:33.279140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:05.365 [2024-06-09 20:48:33.291968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.365 [2024-06-09 20:48:33.317682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.365 [2024-06-09 20:48:33.331488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:05.365 passed 00:06:05.365 Test: blob_io_unit ...passed 00:06:05.365 Test: blob_io_unit_compatibility ...passed 00:06:05.365 Test: blob_ext_md_pages ...passed 00:06:05.365 Test: blob_esnap_io_4096_4096 ...passed 00:06:05.365 Test: blob_esnap_io_512_512 ...passed 00:06:05.365 Test: blob_esnap_io_4096_512 ...passed 00:06:05.365 Test: 
blob_esnap_io_512_4096 ...passed 00:06:05.365 Suite: blob_bs_copy_extent 00:06:05.624 Test: blob_open ...passed 00:06:05.624 Test: blob_create ...[2024-06-09 20:48:33.586912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:05.624 passed 00:06:05.624 Test: blob_create_loop ...passed 00:06:05.624 Test: blob_create_fail ...[2024-06-09 20:48:33.692278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:05.624 passed 00:06:05.624 Test: blob_create_internal ...passed 00:06:05.624 Test: blob_create_zero_extent ...passed 00:06:05.882 Test: blob_snapshot ...passed 00:06:05.882 Test: blob_clone ...passed 00:06:05.882 Test: blob_inflate ...[2024-06-09 20:48:33.874365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:05.882 passed 00:06:05.882 Test: blob_delete ...passed 00:06:05.882 Test: blob_resize_test ...[2024-06-09 20:48:33.941780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:05.882 passed 00:06:05.883 Test: channel_ops ...passed 00:06:05.883 Test: blob_super ...passed 00:06:05.883 Test: blob_rw_verify_iov ...passed 00:06:06.141 Test: blob_unmap ...passed 00:06:06.141 Test: blob_iter ...passed 00:06:06.141 Test: blob_parse_md ...passed 00:06:06.141 Test: bs_load_pending_removal ...passed 00:06:06.141 Test: bs_unload ...[2024-06-09 20:48:34.210286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:06.141 passed 00:06:06.141 Test: bs_usable_clusters ...passed 00:06:06.141 Test: blob_crc ...[2024-06-09 20:48:34.274253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:06.141 [2024-06-09 20:48:34.274669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:06.141 passed 00:06:06.141 Test: blob_flags ...passed 00:06:06.401 Test: bs_version ...passed 00:06:06.401 Test: blob_set_xattrs_test ...[2024-06-09 20:48:34.367707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:06.401 [2024-06-09 20:48:34.368070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:06.401 passed 00:06:06.401 Test: blob_thin_prov_alloc ...passed 00:06:06.401 Test: blob_insert_cluster_msg_test ...passed 00:06:06.401 Test: blob_thin_prov_rw ...passed 00:06:06.659 Test: blob_thin_prov_rle ...passed 00:06:06.659 Test: blob_thin_prov_rw_iov ...passed 00:06:06.659 Test: blob_snapshot_rw ...passed 00:06:06.659 Test: blob_snapshot_rw_iov ...passed 00:06:06.918 Test: blob_inflate_rw ...passed 00:06:06.918 Test: blob_snapshot_freeze_io ...passed 00:06:06.918 Test: blob_operation_split_rw ...passed 00:06:07.177 Test: blob_operation_split_rw_iov ...passed 00:06:07.177 Test: blob_simultaneous_operations ...[2024-06-09 20:48:35.202492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:07.177 [2024-06-09 
20:48:35.202847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:07.177 [2024-06-09 20:48:35.203306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:07.177 [2024-06-09 20:48:35.203474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:07.177 [2024-06-09 20:48:35.205836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:07.177 [2024-06-09 20:48:35.206066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:07.177 [2024-06-09 20:48:35.206204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:06:07.177 [2024-06-09 20:48:35.206345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:06:07.177 passed
00:06:07.177 Test: blob_persist_test ...passed
00:06:07.436 Test: blob_decouple_snapshot ...passed
00:06:07.436 Test: blob_seek_io_unit ...passed
00:06:07.436 Test: blob_nested_freezes ...passed
00:06:07.436 Suite: blob_blob_copy_extent
00:06:07.436 Test: blob_write ...passed
00:06:07.436 Test: blob_read ...passed
00:06:07.436 Test: blob_rw_verify ...passed
00:06:07.436 Test: blob_rw_verify_iov_nomem ...passed
00:06:07.436 Test: blob_rw_iov_read_only ...passed
00:06:07.436 Test: blob_xattr ...passed
00:06:07.436 Test: blob_dirty_shutdown ...passed
00:06:07.695 Test: blob_is_degraded ...passed
00:06:07.695 Suite: blob_esnap_bs_copy_extent
00:06:07.695 Test: blob_esnap_create ...passed
00:06:07.695 Test: blob_esnap_thread_add_remove ...passed
00:06:07.695 Test: blob_esnap_clone_snapshot ...passed
00:06:07.695 Test: blob_esnap_clone_inflate ...passed
00:06:07.695 Test: blob_esnap_clone_decouple ...passed
00:06:07.695 Test: blob_esnap_clone_reload ...passed
00:06:07.695 Test: blob_esnap_hotplug ...passed
00:06:07.695
00:06:07.695 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.695               suites     16     16    n/a      0        0
00:06:07.695                tests    348    348    348      0        0
00:06:07.695              asserts  92605  92605  92605      0      n/a
00:06:07.695
00:06:07.695 Elapsed time = 12.418 seconds
00:06:07.955 20:48:35 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:06:07.955
00:06:07.955
00:06:07.955 CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.955 http://cunit.sourceforge.net/
00:06:07.955
00:06:07.955
00:06:07.955 Suite: blob_bdev
00:06:07.955 Test: create_bs_dev ...passed
00:06:07.955 Test: create_bs_dev_ro ...[2024-06-09 20:48:35.935382] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:06:07.955 passed
00:06:07.955 Test: create_bs_dev_rw ...passed
00:06:07.955 Test: claim_bs_dev ...[2024-06-09 20:48:35.936464] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:06:07.955 passed
00:06:07.955 Test: claim_bs_dev_ro ...passed
00:06:07.955 Test: deferred_destroy_refs ...passed
00:06:07.955 Test: deferred_destroy_channels ...passed
00:06:07.955 Test: deferred_destroy_threads ...passed
00:06:07.955
00:06:07.955 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.955               suites      1      1    n/a      0        0
00:06:07.955                tests      8      8      8      0        0
00:06:07.955              asserts    119    119    119      0      n/a
00:06:07.955
00:06:07.955 Elapsed time = 0.001 seconds
00:06:07.955 20:48:35 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:06:07.955
00:06:07.955
00:06:07.955 CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.955 http://cunit.sourceforge.net/
00:06:07.955
00:06:07.955
00:06:07.955 Suite: tree
00:06:07.955 Test: blobfs_tree_op_test ...passed
00:06:07.955
00:06:07.955 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:07.955               suites      1      1    n/a      0        0
00:06:07.955                tests      1      1      1      0        0
00:06:07.955              asserts     27     27     27      0      n/a
00:06:07.955
00:06:07.955 Elapsed time = 0.000 seconds
00:06:07.955 20:48:35 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:06:07.955
00:06:07.955
00:06:07.955 CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.955 http://cunit.sourceforge.net/
00:06:07.955
00:06:07.955
00:06:07.955 Suite: blobfs_async_ut
00:06:07.955 Test: fs_init ...passed
00:06:07.955 Test: fs_open ...passed
00:06:07.955 Test: fs_create ...passed
00:06:07.955 Test: fs_truncate ...passed
00:06:07.955 Test: fs_rename ...[2024-06-09 20:48:36.102343] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted
00:06:07.955 passed
00:06:07.955 Test: fs_rw_async ...passed
00:06:07.955 Test: fs_writev_readv_async ...passed
00:06:07.955 Test: tree_find_buffer_ut ...passed
00:06:08.214 Test: channel_ops ...passed
00:06:08.214 Test: channel_ops_sync ...passed
00:06:08.214
00:06:08.214 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.214               suites      1      1    n/a      0        0
00:06:08.214                tests     10     10     10      0        0
00:06:08.214              asserts    292    292    292      0      n/a
00:06:08.214
00:06:08.214 Elapsed time = 0.148 seconds
00:06:08.214 20:48:36 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:06:08.214
00:06:08.214
00:06:08.214 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.214 http://cunit.sourceforge.net/
00:06:08.214
00:06:08.214
00:06:08.214 Suite: blobfs_sync_ut
00:06:08.214 Test: cache_read_after_write ...[2024-06-09 20:48:36.269975] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted
00:06:08.214 passed
00:06:08.214 Test: file_length ...passed
00:06:08.214 Test: append_write_to_extend_blob ...passed
00:06:08.214 Test: partial_buffer ...passed
00:06:08.214 Test: cache_write_null_buffer ...passed
00:06:08.214 Test: fs_create_sync ...passed
00:06:08.214 Test: fs_rename_sync ...passed
00:06:08.214 Test: cache_append_no_cache ...passed
00:06:08.473 Test: fs_delete_file_without_close ...passed
00:06:08.473
00:06:08.473 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.473               suites      1      1    n/a      0        0
00:06:08.473                tests      9      9      9      0        0
00:06:08.473              asserts    345    345    345      0      n/a
00:06:08.473
00:06:08.473 Elapsed time = 0.367 seconds
00:06:08.473 20:48:36 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:06:08.473
00:06:08.473
00:06:08.473 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.473 http://cunit.sourceforge.net/
00:06:08.473
00:06:08.473
00:06:08.473 Suite: blobfs_bdev_ut
00:06:08.473 Test: spdk_blobfs_bdev_detect_test ...[2024-06-09 20:48:36.455201] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
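(The unload error above is spdk_blobfs_bdev_detect_test exercising its negative path; its verdict follows below.) Every "Suite:", every "Test: ... passed" line, and every "Run Summary" table in this log comes from the same CUnit scaffolding: each *_ut binary registers suites and tests, runs them in verbose basic mode, and exits with the failure count, which unittest.sh then checks. A minimal sketch of that harness:

#include <CUnit/Basic.h>

/* A stand-in test body; the real binaries register dozens of these. */
static void
test_example(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_add_test(suite, "test_example", test_example);
	CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the "Test: ... passed" lines */
	CU_basic_run_tests();               /* prints the "Run Summary" table */
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return failures > 0 ? 1 : 0;
}
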
00:06:08.473 passed 00:06:08.473 Test: spdk_blobfs_bdev_create_test ...[2024-06-09 20:48:36.456114] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:08.473 passed 00:06:08.473 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:08.473 00:06:08.473 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.473 suites 1 1 n/a 0 0 00:06:08.473 tests 3 3 3 0 0 00:06:08.473 asserts 9 9 9 0 n/a 00:06:08.473 00:06:08.473 Elapsed time = 0.001 seconds 00:06:08.473 00:06:08.473 real 0m13.202s 00:06:08.473 user 0m12.572s 00:06:08.473 sys 0m0.692s 00:06:08.473 20:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.473 20:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.473 ************************************ 00:06:08.473 END TEST unittest_blob_blobfs 00:06:08.473 ************************************ 00:06:08.473 20:48:36 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:08.473 20:48:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.473 20:48:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.473 20:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.473 ************************************ 00:06:08.473 START TEST unittest_event 00:06:08.473 ************************************ 00:06:08.473 20:48:36 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:08.473 20:48:36 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:08.473 00:06:08.473 00:06:08.473 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.473 http://cunit.sourceforge.net/ 00:06:08.473 00:06:08.473 00:06:08.473 Suite: app_suite 00:06:08.473 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:08.473 options: 00:06:08.473 -c, --config JSON config file (default none) 00:06:08.473 --json JSON config file (default none) 00:06:08.473 --json-ignore-init-errors 00:06:08.473 don't exit on invalid config entry 00:06:08.473 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:08.473 -g, --single-file-segments 00:06:08.473 force creating just one hugetlbfs file 00:06:08.473 -h, --help show this usage 00:06:08.473 -i, --shm-id shared memory ID (optional) 00:06:08.473 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:08.473 --lcores lcore to CPU mapping list. The list is in the format: 00:06:08.473 [<,lcores[@CPUs]>...] 00:06:08.473 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:08.473 Within the group, '-' is used for range separator, 00:06:08.473 ',' is used for single number separator. 00:06:08.473 '( )' can be omitted for single element group, 00:06:08.473 '@' can be omitted if cpus and lcores have the same value 00:06:08.473 -n, --mem-channels channel number of memory channels used for DPDK 00:06:08.473 -p, --main-core main (primary) core for DPDK 00:06:08.474 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:08.474 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:08.474 --disable-cpumask-locks Disable CPU core lock files. 
00:06:08.474 --silence-noticelog disable notice level logging to stderr 00:06:08.474 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:08.474 app_ut: invalid option -- 'z' 00:06:08.474 -u, --no-pci disable PCI access 00:06:08.474 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:08.474 --max-delay maximum reactor delay (in microseconds) 00:06:08.474 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:08.474 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:08.474 -R, --huge-unlink unlink huge files after initialization 00:06:08.474 -v, --version print SPDK version 00:06:08.474 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:08.474 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:08.474 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:08.474 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:08.474 Tracepoints vary in size and can use more than one trace entry. 00:06:08.474 --rpcs-allowed comma-separated list of permitted RPCS 00:06:08.474 --env-context Opaque context for use of the env implementation 00:06:08.474 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:08.474 --no-huge run without using hugepages 00:06:08.474 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:08.474 -e, --tpoint-group [:] 00:06:08.474 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:08.474 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:08.474 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:08.474 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:08.474 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:08.474 app_ut [options] 00:06:08.474 options: 00:06:08.474 -c, --config JSON config file (default none) 00:06:08.474 --json JSON config file (default none) 00:06:08.474 --json-ignore-init-errors 00:06:08.474 don't exit on invalid config entry 00:06:08.474 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:08.474 -g, --single-file-segments 00:06:08.474 force creating just one hugetlbfs file 00:06:08.474 -h, --help show this usage 00:06:08.474 -i, --shm-id shared memory ID (optional) 00:06:08.474 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:08.474 --lcores lcore to CPU mapping list. The list is in the format: 00:06:08.474 [<,lcores[@CPUs]>...] 00:06:08.474 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:08.474 Within the group, '-' is used for range separator, 00:06:08.474 ',' is used for single number separator. 
00:06:08.474 '( )' can be omitted for single element group, 00:06:08.474 '@' can be omitted if cpus and lcores have the same value 00:06:08.474 -n, --mem-channels channel number of memory channels used for DPDK 00:06:08.474 -p, --main-core main (primary) core for DPDK 00:06:08.474 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:08.474 app_ut: unrecognized option '--test-long-opt' 00:06:08.474 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:08.474 --disable-cpumask-locks Disable CPU core lock files. 00:06:08.474 --silence-noticelog disable notice level logging to stderr 00:06:08.474 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:08.474 -u, --no-pci disable PCI access 00:06:08.474 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:08.474 --max-delay maximum reactor delay (in microseconds) 00:06:08.474 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:08.474 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:08.474 -R, --huge-unlink unlink huge files after initialization 00:06:08.474 -v, --version print SPDK version 00:06:08.474 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:08.474 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:08.474 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:08.474 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:08.474 Tracepoints vary in size and can use more than one trace entry. 00:06:08.474 --rpcs-allowed comma-separated list of permitted RPCS 00:06:08.474 --env-context Opaque context for use of the env implementation 00:06:08.474 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:08.474 --no-huge run without using hugepages 00:06:08.474 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:08.474 -e, --tpoint-group [:] 00:06:08.474 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:08.474 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:08.474 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:08.474 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:08.474 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:08.474 [2024-06-09 20:48:36.532738] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
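The usage dumps that dominate this passage are test_spdk_app_parse_args driving spdk_app_parse_args() with deliberately bad input: an unknown short option (-z), an unknown long option (--test-long-opt), a getopt string that reuses the generic 'c' config option (the "Duplicated option" error above), and, just below, the mutually exclusive -B and -W flags. A sketch of the call shape being exercised; the handlers and option string are illustrative, and spdk_app_opts_init() gained a size argument in later SPDK releases:

#include <stdio.h>

#include "spdk/event.h"

static int
app_parse(int ch, char *arg)
{
	(void)ch; (void)arg;
	return 0;	/* accept the app-specific option */
}

static void
app_usage(void)
{
	printf(" -z               illustrative app-specific flag\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};

	spdk_app_opts_init(&opts);
	opts.name = "app_ut";

	/*
	 * "z" is fine here; a string containing 'c' would collide with the
	 * generic -c (config file) option and fail with the "Duplicated
	 * option" error shown above.
	 */
	if (spdk_app_parse_args(argc, argv, &opts, "z", NULL,
				app_parse, app_usage) !=
	    SPDK_APP_PARSE_ARGS_SUCCESS) {
		return 1;
	}
	return 0;
}
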
00:06:08.474 [2024-06-09 20:48:36.533571] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:08.474 app_ut [options] 00:06:08.474 options: 00:06:08.474 -c, --config JSON config file (default none) 00:06:08.474 --json JSON config file (default none) 00:06:08.474 --json-ignore-init-errors 00:06:08.474 don't exit on invalid config entry 00:06:08.474 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:08.474 -g, --single-file-segments 00:06:08.474 force creating just one hugetlbfs file 00:06:08.474 -h, --help show this usage 00:06:08.474 -i, --shm-id shared memory ID (optional) 00:06:08.474 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:08.474 --lcores lcore to CPU mapping list. The list is in the format: 00:06:08.474 [<,lcores[@CPUs]>...] 00:06:08.474 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:08.474 Within the group, '-' is used for range separator, 00:06:08.474 ',' is used for single number separator. 00:06:08.474 '( )' can be omitted for single element group, 00:06:08.474 '@' can be omitted if cpus and lcores have the same value 00:06:08.474 -n, --mem-channels channel number of memory channels used for DPDK 00:06:08.474 -p, --main-core main (primary) core for DPDK 00:06:08.474 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:08.474 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:08.474 --disable-cpumask-locks Disable CPU core lock files. 00:06:08.474 --silence-noticelog disable notice level logging to stderr 00:06:08.474 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:08.474 -u, --no-pci disable PCI access 00:06:08.474 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:08.474 --max-delay maximum reactor delay (in microseconds) 00:06:08.474 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:08.474 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:08.474 -R, --huge-unlink unlink huge files after initialization 00:06:08.474 -v, --version print SPDK version 00:06:08.474 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:08.474 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:08.474 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:08.474 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:08.474 Tracepoints vary in size and can use more than one trace entry. 00:06:08.474 --rpcs-allowed comma-separated list of permitted RPCS 00:06:08.474 --env-context Opaque context for use of the env implementation 00:06:08.474 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:08.474 --no-huge run without using hugepages 00:06:08.474 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:08.474 -e, --tpoint-group [:] 00:06:08.474 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:08.474 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:08.474 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:08.474 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:08.474 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:08.474 passed 00:06:08.474 00:06:08.474 [2024-06-09 20:48:36.534036] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:08.474 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.474 suites 1 1 n/a 0 0 00:06:08.474 tests 1 1 1 0 0 00:06:08.475 asserts 8 8 8 0 n/a 00:06:08.475 00:06:08.475 Elapsed time = 0.001 seconds 00:06:08.475 20:48:36 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:08.475 00:06:08.475 00:06:08.475 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.475 http://cunit.sourceforge.net/ 00:06:08.475 00:06:08.475 00:06:08.475 Suite: app_suite 00:06:08.475 Test: test_create_reactor ...passed 00:06:08.475 Test: test_init_reactors ...passed 00:06:08.475 Test: test_event_call ...passed 00:06:08.475 Test: test_schedule_thread ...passed 00:06:08.475 Test: test_reschedule_thread ...passed 00:06:08.475 Test: test_bind_thread ...passed 00:06:08.475 Test: test_for_each_reactor ...passed 00:06:08.475 Test: test_reactor_stats ...passed 00:06:08.475 Test: test_scheduler ...passed 00:06:08.475 Test: test_governor ...passed 00:06:08.475 00:06:08.475 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.475 suites 1 1 n/a 0 0 00:06:08.475 tests 10 10 10 0 0 00:06:08.475 asserts 344 344 344 0 n/a 00:06:08.475 00:06:08.475 Elapsed time = 0.020 seconds 00:06:08.475 00:06:08.475 real 0m0.085s 00:06:08.475 user 0m0.057s 00:06:08.475 sys 0m0.028s 00:06:08.475 20:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.475 20:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.475 ************************************ 00:06:08.475 END TEST unittest_event 00:06:08.475 ************************************ 00:06:08.475 20:48:36 -- unit/unittest.sh@233 -- # uname -s 00:06:08.475 20:48:36 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:08.475 20:48:36 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:08.475 20:48:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.475 20:48:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.475 20:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.734 ************************************ 00:06:08.734 START TEST unittest_ftl 00:06:08.734 ************************************ 00:06:08.734 20:48:36 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:08.734 20:48:36 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:08.734 00:06:08.734 00:06:08.734 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.734 http://cunit.sourceforge.net/ 00:06:08.734 00:06:08.734 00:06:08.734 Suite: ftl_band_suite 00:06:08.734 Test: test_band_block_offset_from_addr_base ...passed 00:06:08.734 Test: test_band_block_offset_from_addr_offset ...passed 00:06:08.734 Test: test_band_addr_from_block_offset ...passed 00:06:08.734 Test: test_band_set_addr ...passed 00:06:08.734 Test: test_invalidate_addr ...passed 00:06:08.734 Test: test_next_xfer_addr ...passed 00:06:08.734 00:06:08.734 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.734 suites 1 1 n/a 0 0 00:06:08.734 tests 6 6 6 0 0 00:06:08.734 asserts 30356 30356 30356 0 n/a 00:06:08.734 
00:06:08.734 Elapsed time = 0.196 seconds 00:06:08.993 20:48:36 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_bitmap 00:06:08.993 Test: test_ftl_bitmap_create ...[2024-06-09 20:48:36.945884] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:08.993 [2024-06-09 20:48:36.946344] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:08.993 passed 00:06:08.993 Test: test_ftl_bitmap_get ...passed 00:06:08.993 Test: test_ftl_bitmap_set ...passed 00:06:08.993 Test: test_ftl_bitmap_clear ...passed 00:06:08.993 Test: test_ftl_bitmap_find_first_set ...passed 00:06:08.993 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:08.993 Test: test_ftl_bitmap_count_set ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 7 7 7 0 0 00:06:08.993 asserts 137 137 137 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.001 seconds 00:06:08.993 20:48:36 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_io_suite 00:06:08.993 Test: test_completion ...passed 00:06:08.993 Test: test_multiple_ios ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 2 2 2 0 0 00:06:08.993 asserts 47 47 47 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.004 seconds 00:06:08.993 20:48:36 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_mngt 00:06:08.993 Test: test_next_step ...passed 00:06:08.993 Test: test_continue_step ...passed 00:06:08.993 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:08.993 Test: test_fail_step ...passed 00:06:08.993 Test: test_mngt_call_and_call_rollback ...passed 00:06:08.993 Test: test_nested_process_failure ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 6 6 6 0 0 00:06:08.993 asserts 176 176 176 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.001 seconds 00:06:08.993 20:48:37 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_mempool 00:06:08.993 Test: test_ftl_mempool_create ...passed 00:06:08.993 Test: test_ftl_mempool_get_put ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 2 2 2 0 0 00:06:08.993 asserts 36 36 36 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.000 seconds 00:06:08.993 20:48:37 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_addr64_suite 00:06:08.993 Test: test_addr_cached ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 1 1 1 0 0 00:06:08.993 asserts 1536 1536 1536 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.000 seconds 00:06:08.993 20:48:37 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_sb 00:06:08.993 Test: test_sb_crc_v2 ...passed 00:06:08.993 Test: test_sb_crc_v3 ...passed 00:06:08.993 Test: test_sb_v3_md_layout ...[2024-06-09 20:48:37.070102] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:08.993 [2024-06-09 20:48:37.070505] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:08.993 [2024-06-09 20:48:37.070581] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:08.993 [2024-06-09 20:48:37.070635] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:08.993 [2024-06-09 20:48:37.070678] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:08.993 [2024-06-09 20:48:37.070778] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:08.993 [2024-06-09 20:48:37.070823] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:08.993 [2024-06-09 20:48:37.070887] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:08.993 [2024-06-09 20:48:37.070998] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:08.993 [2024-06-09 20:48:37.071054] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:08.993 passed 00:06:08.993 Test: test_sb_v5_md_layout ...[2024-06-09 20:48:37.071104] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:08.993 passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 4 4 4 0 0 00:06:08.993 asserts 148 148 148 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.003 seconds 00:06:08.993 20:48:37 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:08.993 00:06:08.993 00:06:08.993 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:08.993 http://cunit.sourceforge.net/ 00:06:08.993 00:06:08.993 00:06:08.993 Suite: ftl_layout_upgrade 00:06:08.993 Test: test_l2p_upgrade ...passed 00:06:08.993 00:06:08.993 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.993 suites 1 1 n/a 0 0 00:06:08.993 tests 1 1 1 0 0 00:06:08.993 asserts 140 140 140 0 n/a 00:06:08.993 00:06:08.993 Elapsed time = 0.001 seconds 00:06:08.993 00:06:08.993 real 0m0.449s 00:06:08.993 user 0m0.174s 00:06:08.993 sys 0m0.276s 00:06:08.993 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.993 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:08.993 ************************************ 00:06:08.993 END TEST unittest_ftl 00:06:08.993 ************************************ 00:06:08.993 20:48:37 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:08.993 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.993 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.993 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:08.993 ************************************ 00:06:08.993 START TEST unittest_accel 00:06:08.993 ************************************ 00:06:08.993 20:48:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:09.269 00:06:09.269 00:06:09.269 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.269 http://cunit.sourceforge.net/ 00:06:09.269 00:06:09.269 00:06:09.269 Suite: accel_sequence 00:06:09.269 Test: test_sequence_fill_copy ...passed 00:06:09.269 Test: test_sequence_abort ...passed 00:06:09.269 Test: test_sequence_append_error ...passed 00:06:09.269 Test: test_sequence_completion_error ...[2024-06-09 20:48:37.180008] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb160f287c0 00:06:09.269 [2024-06-09 20:48:37.180367] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fb160f287c0 00:06:09.269 [2024-06-09 20:48:37.180467] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fb160f287c0 00:06:09.269 [2024-06-09 20:48:37.180541] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fb160f287c0 00:06:09.269 passed 00:06:09.269 Test: test_sequence_decompress ...passed 00:06:09.269 Test: test_sequence_reverse ...passed 00:06:09.269 Test: test_sequence_copy_elision ...passed 00:06:09.269 Test: test_sequence_accel_buffers ...passed 00:06:09.269 Test: test_sequence_memory_domain ...[2024-06-09 20:48:37.192986] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:09.269 [2024-06-09 20:48:37.193212] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:09.269 passed 00:06:09.269 Test: test_sequence_module_memory_domain ...passed 00:06:09.269 Test: test_sequence_crypto ...passed 00:06:09.269 Test: test_sequence_driver ...[2024-06-09 20:48:37.200517] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fb1603007c0 using driver: ut 00:06:09.269 
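
Every "Suite:" / "Test: ...passed" / "Run Summary" block in this log is emitted by CUnit 2.1-3 (http://cunit.sourceforge.net/), which each *_ut binary links against. A minimal sketch of that harness pattern, with made-up suite and test names:

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
            CU_ASSERT(1 + 1 == 2);  /* each check feeds the "asserts" column */
    }

    int
    main(void)
    {
            CU_pSuite suite;
            unsigned int num_failures;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            suite = CU_add_suite("example_suite", NULL, NULL);
            CU_add_test(suite, "test_example", test_example);

            CU_basic_set_mode(CU_BRM_VERBOSE);  /* prints the per-test "...passed" lines */
            CU_basic_run_tests();
            num_failures = CU_get_number_of_failures();
            CU_cleanup_registry();
            return num_failures;
    }

This is why each Run Summary reports separate test and assert counts: one test function typically performs many assertions.
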
[2024-06-09 20:48:37.200671] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb1603007c0 through driver: ut 00:06:09.269 passed 00:06:09.269 Test: test_sequence_same_iovs ...passed 00:06:09.269 Test: test_sequence_crc32 ...passed 00:06:09.269 Suite: accel 00:06:09.269 Test: test_spdk_accel_task_complete ...passed 00:06:09.269 Test: test_get_task ...passed 00:06:09.269 Test: test_spdk_accel_submit_copy ...passed 00:06:09.269 Test: test_spdk_accel_submit_dualcast ...[2024-06-09 20:48:37.206189] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:09.269 [2024-06-09 20:48:37.206274] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:09.269 passed 00:06:09.269 Test: test_spdk_accel_submit_compare ...passed 00:06:09.269 Test: test_spdk_accel_submit_fill ...passed 00:06:09.269 Test: test_spdk_accel_submit_crc32c ...passed 00:06:09.269 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:09.269 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:09.269 Test: test_spdk_accel_submit_xor ...passed 00:06:09.269 Test: test_spdk_accel_module_find_by_name ...passed 00:06:09.269 Test: test_spdk_accel_module_register ...passed 00:06:09.269 00:06:09.269 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.269 suites 2 2 n/a 0 0 00:06:09.269 tests 26 26 26 0 0 00:06:09.269 asserts 831 831 831 0 n/a 00:06:09.269 00:06:09.269 Elapsed time = 0.038 seconds 00:06:09.269 00:06:09.269 real 0m0.082s 00:06:09.269 user 0m0.041s 00:06:09.269 sys 0m0.042s 00:06:09.269 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.269 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 ************************************ 00:06:09.269 END TEST unittest_accel 00:06:09.269 ************************************ 00:06:09.269 20:48:37 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:09.269 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.269 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.269 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 ************************************ 00:06:09.269 START TEST unittest_ioat 00:06:09.269 ************************************ 00:06:09.269 20:48:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:09.269 00:06:09.269 00:06:09.269 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.269 http://cunit.sourceforge.net/ 00:06:09.269 00:06:09.269 00:06:09.269 Suite: ioat 00:06:09.269 Test: ioat_state_check ...passed 00:06:09.269 00:06:09.269 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.269 suites 1 1 n/a 0 0 00:06:09.269 tests 1 1 1 0 0 00:06:09.269 asserts 32 32 32 0 n/a 00:06:09.269 00:06:09.269 Elapsed time = 0.000 seconds 00:06:09.269 00:06:09.269 real 0m0.025s 00:06:09.269 user 0m0.009s 00:06:09.269 sys 0m0.017s 00:06:09.269 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.269 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 ************************************ 00:06:09.269 END TEST unittest_ioat 00:06:09.269 ************************************ 00:06:09.269 20:48:37 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:09.269 20:48:37 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:09.269 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.269 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.269 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 ************************************ 00:06:09.269 START TEST unittest_idxd_user 00:06:09.269 ************************************ 00:06:09.269 20:48:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:09.269 00:06:09.269 00:06:09.269 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.269 http://cunit.sourceforge.net/ 00:06:09.269 00:06:09.269 00:06:09.269 Suite: idxd_user 00:06:09.269 Test: test_idxd_wait_cmd ...[2024-06-09 20:48:37.379203] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:09.269 [2024-06-09 20:48:37.379498] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:09.269 passed 00:06:09.269 Test: test_idxd_reset_dev ...passed 00:06:09.269 Test: test_idxd_group_config ...passed 00:06:09.269 Test: test_idxd_wq_config ...passed 00:06:09.269 00:06:09.269 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.269 suites 1 1 n/a 0 0 00:06:09.269 tests 4 4 4 0 0 00:06:09.269 asserts 20 20 20 0 n/a 00:06:09.269 00:06:09.269 Elapsed time = 0.001 seconds 00:06:09.269 [2024-06-09 20:48:37.379625] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:09.269 [2024-06-09 20:48:37.379673] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:09.269 00:06:09.269 real 0m0.032s 00:06:09.269 user 0m0.016s 00:06:09.269 sys 0m0.017s 00:06:09.269 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.269 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 ************************************ 00:06:09.269 END TEST unittest_idxd_user 00:06:09.269 ************************************ 00:06:09.536 20:48:37 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:09.536 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.536 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.536 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.536 ************************************ 00:06:09.536 START TEST unittest_iscsi 00:06:09.536 ************************************ 00:06:09.536 20:48:37 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:06:09.536 20:48:37 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:09.536 00:06:09.536 00:06:09.536 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.536 http://cunit.sourceforge.net/ 00:06:09.536 00:06:09.536 00:06:09.536 Suite: conn_suite 00:06:09.536 Test: read_task_split_in_order_case ...passed 00:06:09.536 Test: read_task_split_reverse_order_case ...passed 00:06:09.536 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:09.536 Test: process_non_read_task_completion_test ...passed 00:06:09.536 Test: free_tasks_on_connection ...passed 00:06:09.536 Test: free_tasks_with_queued_datain ...passed 00:06:09.536 Test: 
abort_queued_datain_task_test ...passed 00:06:09.536 Test: abort_queued_datain_tasks_test ...passed 00:06:09.536 00:06:09.536 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.536 suites 1 1 n/a 0 0 00:06:09.536 tests 8 8 8 0 0 00:06:09.536 asserts 230 230 230 0 n/a 00:06:09.536 00:06:09.536 Elapsed time = 0.000 seconds 00:06:09.536 20:48:37 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:09.536 00:06:09.536 00:06:09.536 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.536 http://cunit.sourceforge.net/ 00:06:09.536 00:06:09.536 00:06:09.536 Suite: iscsi_suite 00:06:09.536 Test: param_negotiation_test ...passed 00:06:09.536 Test: list_negotiation_test ...passed 00:06:09.536 Test: parse_valid_test ...passed 00:06:09.536 Test: parse_invalid_test ...[2024-06-09 20:48:37.499043] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:09.536 [2024-06-09 20:48:37.499400] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:09.536 [2024-06-09 20:48:37.499458] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:09.537 [2024-06-09 20:48:37.499541] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:09.537 [2024-06-09 20:48:37.499701] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:09.537 [2024-06-09 20:48:37.499779] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:09.537 passed 00:06:09.537 00:06:09.537 [2024-06-09 20:48:37.499919] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.537 suites 1 1 n/a 0 0 00:06:09.537 tests 4 4 4 0 0 00:06:09.537 asserts 161 161 161 0 n/a 00:06:09.537 00:06:09.537 Elapsed time = 0.005 seconds 00:06:09.537 20:48:37 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:09.537 00:06:09.537 00:06:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.537 http://cunit.sourceforge.net/ 00:06:09.537 00:06:09.537 00:06:09.537 Suite: iscsi_target_node_suite 00:06:09.537 Test: add_lun_test_cases ...[2024-06-09 20:48:37.528971] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:09.537 [2024-06-09 20:48:37.529332] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:09.537 [2024-06-09 20:48:37.529457] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:09.537 passed 00:06:09.537 Test: allow_any_allowed ...passed 00:06:09.537 Test: allow_ipv6_allowed ...passed 00:06:09.537 Test: allow_ipv6_denied ...passed 00:06:09.537 Test: allow_ipv6_invalid ...passed 00:06:09.537 Test: allow_ipv4_allowed ...passed 00:06:09.537 Test: allow_ipv4_denied ...passed 00:06:09.537 Test: allow_ipv4_invalid ...passed 00:06:09.537 Test: node_access_allowed ...[2024-06-09 20:48:37.529534] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:09.537 [2024-06-09 20:48:37.529590] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: 
*ERROR*: spdk_scsi_dev_add_lun failed 00:06:09.537 passed 00:06:09.537 Test: node_access_denied_by_empty_netmask ...passed 00:06:09.537 Test: node_access_multi_initiator_groups_cases ...passed 00:06:09.537 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:09.537 Test: chap_param_test_cases ...[2024-06-09 20:48:37.530084] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:09.537 [2024-06-09 20:48:37.530139] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:09.537 passed 00:06:09.537 00:06:09.537 [2024-06-09 20:48:37.530207] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:09.537 [2024-06-09 20:48:37.530242] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:09.537 [2024-06-09 20:48:37.530291] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.537 suites 1 1 n/a 0 0 00:06:09.537 tests 13 13 13 0 0 00:06:09.537 asserts 50 50 50 0 n/a 00:06:09.537 00:06:09.537 Elapsed time = 0.001 seconds 00:06:09.537 20:48:37 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:09.537 00:06:09.537 00:06:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.537 http://cunit.sourceforge.net/ 00:06:09.537 00:06:09.537 00:06:09.537 Suite: iscsi_suite 00:06:09.537 Test: op_login_check_target_test ...[2024-06-09 20:48:37.570702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:09.537 passed 00:06:09.537 Test: op_login_session_normal_test ...[2024-06-09 20:48:37.571213] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:09.537 [2024-06-09 20:48:37.571312] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:09.537 [2024-06-09 20:48:37.571385] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:09.537 [2024-06-09 20:48:37.571496] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:09.537 [2024-06-09 20:48:37.571659] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:09.537 [2024-06-09 20:48:37.571847] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:09.537 [2024-06-09 20:48:37.571985] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:09.537 passed 00:06:09.537 Test: maxburstlength_test ...[2024-06-09 20:48:37.572377] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:09.537 [2024-06-09 20:48:37.572487] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:06:09.537 passed 00:06:09.537 Test: underflow_for_read_transfer_test ...passed 00:06:09.537 Test: underflow_for_zero_read_transfer_test ...passed 00:06:09.537 Test: underflow_for_request_sense_test ...passed 00:06:09.537 Test: underflow_for_check_condition_test ...passed 00:06:09.537 Test: add_transfer_task_test ...passed 00:06:09.537 Test: get_transfer_task_test ...passed 00:06:09.537 Test: del_transfer_task_test ...passed 00:06:09.537 Test: clear_all_transfer_tasks_test ...passed 00:06:09.537 Test: build_iovs_test ...passed 00:06:09.537 Test: build_iovs_with_md_test ...passed 00:06:09.537 Test: pdu_hdr_op_login_test ...[2024-06-09 20:48:37.574460] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:09.537 [2024-06-09 20:48:37.574628] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:09.537 [2024-06-09 20:48:37.574764] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_text_test ...[2024-06-09 20:48:37.574933] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:09.537 [2024-06-09 20:48:37.575060] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:09.537 [2024-06-09 20:48:37.575130] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_logout_test ...[2024-06-09 20:48:37.575249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
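
The param_ut errors earlier in this group ('=' not found, Empty key, Key name length is bigger than 63, Overflow Val ...) describe the validation applied to iSCSI key=value negotiation text. Below is a simplified illustration of those checks, not SPDK's actual implementation; the 255-byte value cap is an assumption inferred from the "Overflow Val 256" message, and the key limit follows RFC 7143.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_KEY_LEN 63    /* matches the "bigger than 63" error above */
    #define MAX_VAL_LEN 255   /* assumed limit; the log shows "Overflow Val 256" */

    /* key must hold MAX_KEY_LEN + 1 bytes, val must hold MAX_VAL_LEN + 1 bytes. */
    static int
    parse_param(const char *text, char *key, char *val)
    {
            const char *eq = strchr(text, '=');
            size_t key_len, val_len;

            if (eq == NULL) {
                    fprintf(stderr, "'=' not found\n");
                    return -EINVAL;
            }
            key_len = (size_t)(eq - text);
            if (key_len == 0) {
                    fprintf(stderr, "Empty key\n");
                    return -EINVAL;
            }
            if (key_len > MAX_KEY_LEN) {
                    fprintf(stderr, "Key name length is bigger than %d\n", MAX_KEY_LEN);
                    return -EINVAL;
            }
            val_len = strlen(eq + 1);
            if (val_len > MAX_VAL_LEN) {
                    fprintf(stderr, "Overflow Val %zu\n", val_len);
                    return -EINVAL;
            }
            memcpy(key, text, key_len);
            key[key_len] = '\0';
            memcpy(val, eq + 1, val_len + 1);
            return 0;
    }

The "Duplicated Key B" case would additionally require tracking keys already seen across the negotiation text, which this sketch omits.
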
00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_scsi_test ...[2024-06-09 20:48:37.575480] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:09.537 [2024-06-09 20:48:37.575559] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:09.537 [2024-06-09 20:48:37.575642] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:09.537 [2024-06-09 20:48:37.575783] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:09.537 [2024-06-09 20:48:37.575897] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-09 20:48:37.576108] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:09.537 [2024-06-09 20:48:37.576263] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:09.537 [2024-06-09 20:48:37.576387] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_nopout_test ...[2024-06-09 20:48:37.576689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:09.537 [2024-06-09 20:48:37.576822] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:09.537 passed 00:06:09.537 Test: pdu_hdr_op_data_test ...[2024-06-09 20:48:37.576883] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:09.537 [2024-06-09 20:48:37.576953] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:09.537 [2024-06-09 20:48:37.577030] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:09.537 [2024-06-09 20:48:37.577153] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:09.537 [2024-06-09 20:48:37.577284] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:09.537 [2024-06-09 20:48:37.577401] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:09.537 [2024-06-09 20:48:37.577560] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:09.537 [2024-06-09 20:48:37.577707] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:09.537 [2024-06-09 20:48:37.577805] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:09.537 passed 00:06:09.537 Test: empty_text_with_cbit_test ...passed 00:06:09.537 Test: pdu_payload_read_test ...[2024-06-09 
20:48:37.580459] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:09.537 passed 00:06:09.537 Test: data_out_pdu_sequence_test ...passed 00:06:09.537 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:09.537 00:06:09.537 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.537 suites 1 1 n/a 0 0 00:06:09.537 tests 24 24 24 0 0 00:06:09.537 asserts 150253 150253 150253 0 n/a 00:06:09.537 00:06:09.537 Elapsed time = 0.022 seconds 00:06:09.537 20:48:37 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:09.537 00:06:09.537 00:06:09.537 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.537 http://cunit.sourceforge.net/ 00:06:09.537 00:06:09.537 00:06:09.538 Suite: init_grp_suite 00:06:09.538 Test: create_initiator_group_success_case ...passed 00:06:09.538 Test: find_initiator_group_success_case ...passed 00:06:09.538 Test: register_initiator_group_twice_case ...passed 00:06:09.538 Test: add_initiator_name_success_case ...passed 00:06:09.538 Test: add_initiator_name_fail_case ...[2024-06-09 20:48:37.626949] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:09.538 passed 00:06:09.538 Test: delete_all_initiator_names_success_case ...passed 00:06:09.538 Test: add_netmask_success_case ...passed 00:06:09.538 Test: add_netmask_fail_case ...[2024-06-09 20:48:37.627417] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:09.538 passed 00:06:09.538 Test: delete_all_netmasks_success_case ...passed 00:06:09.538 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:09.538 Test: netmask_overwrite_all_to_any_case ...passed 00:06:09.538 Test: add_delete_initiator_names_case ...passed 00:06:09.538 Test: add_duplicated_initiator_names_case ...passed 00:06:09.538 Test: delete_nonexisting_initiator_names_case ...passed 00:06:09.538 Test: add_delete_netmasks_case ...passed 00:06:09.538 Test: add_duplicated_netmasks_case ...passed 00:06:09.538 Test: delete_nonexisting_netmasks_case ...passed 00:06:09.538 00:06:09.538 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.538 suites 1 1 n/a 0 0 00:06:09.538 tests 17 17 17 0 0 00:06:09.538 asserts 108 108 108 0 n/a 00:06:09.538 00:06:09.538 Elapsed time = 0.001 seconds 00:06:09.538 20:48:37 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:09.538 00:06:09.538 00:06:09.538 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.538 http://cunit.sourceforge.net/ 00:06:09.538 00:06:09.538 00:06:09.538 Suite: portal_grp_suite 00:06:09.538 Test: portal_create_ipv4_normal_case ...passed 00:06:09.538 Test: portal_create_ipv6_normal_case ...passed 00:06:09.538 Test: portal_create_ipv4_wildcard_case ...passed 00:06:09.538 Test: portal_create_ipv6_wildcard_case ...passed 00:06:09.538 Test: portal_create_twice_case ...[2024-06-09 20:48:37.659552] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:09.538 passed 00:06:09.538 Test: portal_grp_register_unregister_case ...passed 00:06:09.538 Test: portal_grp_register_twice_case ...passed 00:06:09.538 Test: portal_grp_add_delete_case ...passed 00:06:09.538 Test: portal_grp_add_delete_twice_case ...passed 00:06:09.538 00:06:09.538 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:09.538 suites 1 1 n/a 0 0 00:06:09.538 tests 9 9 9 0 0 00:06:09.538 asserts 44 44 44 0 n/a 00:06:09.538 00:06:09.538 Elapsed time = 0.004 seconds 00:06:09.538 00:06:09.538 real 0m0.233s 00:06:09.538 user 0m0.125s 00:06:09.538 sys 0m0.111s 00:06:09.538 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.538 ************************************ 00:06:09.538 END TEST unittest_iscsi 00:06:09.538 ************************************ 00:06:09.538 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.797 20:48:37 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:09.797 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.797 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.797 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.797 ************************************ 00:06:09.797 START TEST unittest_json 00:06:09.797 ************************************ 00:06:09.797 20:48:37 -- common/autotest_common.sh@1104 -- # unittest_json 00:06:09.797 20:48:37 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:09.797 00:06:09.797 00:06:09.797 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.797 http://cunit.sourceforge.net/ 00:06:09.797 00:06:09.797 00:06:09.797 Suite: json 00:06:09.797 Test: test_parse_literal ...passed 00:06:09.797 Test: test_parse_string_simple ...passed 00:06:09.797 Test: test_parse_string_control_chars ...passed 00:06:09.797 Test: test_parse_string_utf8 ...passed 00:06:09.797 Test: test_parse_string_escapes_twochar ...passed 00:06:09.797 Test: test_parse_string_escapes_unicode ...passed 00:06:09.797 Test: test_parse_number ...passed 00:06:09.797 Test: test_parse_array ...passed 00:06:09.797 Test: test_parse_object ...passed 00:06:09.797 Test: test_parse_nesting ...passed 00:06:09.797 Test: test_parse_comment ...passed 00:06:09.797 00:06:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.797 suites 1 1 n/a 0 0 00:06:09.797 tests 11 11 11 0 0 00:06:09.797 asserts 1516 1516 1516 0 n/a 00:06:09.797 00:06:09.797 Elapsed time = 0.002 seconds 00:06:09.797 20:48:37 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:09.797 00:06:09.797 00:06:09.797 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.797 http://cunit.sourceforge.net/ 00:06:09.797 00:06:09.797 00:06:09.797 Suite: json 00:06:09.797 Test: test_strequal ...passed 00:06:09.797 Test: test_num_to_uint16 ...passed 00:06:09.797 Test: test_num_to_int32 ...passed 00:06:09.797 Test: test_num_to_uint64 ...passed 00:06:09.797 Test: test_decode_object ...passed 00:06:09.797 Test: test_decode_array ...passed 00:06:09.797 Test: test_decode_bool ...passed 00:06:09.797 Test: test_decode_uint16 ...passed 00:06:09.797 Test: test_decode_int32 ...passed 00:06:09.797 Test: test_decode_uint32 ...passed 00:06:09.797 Test: test_decode_uint64 ...passed 00:06:09.797 Test: test_decode_string ...passed 00:06:09.797 Test: test_decode_uuid ...passed 00:06:09.797 Test: test_find ...passed 00:06:09.797 Test: test_find_array ...passed 00:06:09.797 Test: test_iterating ...passed 00:06:09.797 Test: test_free_object ...passed 00:06:09.797 00:06:09.797 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.798 suites 1 1 n/a 0 0 00:06:09.798 tests 17 17 17 0 0 00:06:09.798 asserts 236 236 236 0 n/a 00:06:09.798 00:06:09.798 Elapsed time = 0.001 seconds 00:06:09.798 
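
The json_write_ut run that follows exercises SPDK's streaming JSON writer. A minimal sketch of how that writer is driven outside the tests, assuming the spdk_json_write_* helpers declared in spdk/json.h; the object fields below are placeholders:

    #include "spdk/json.h"
    #include <stdio.h>

    static int
    write_cb(void *cb_ctx, const void *data, size_t size)
    {
            /* The writer emits output through a user callback; here, stdout. */
            return fwrite(data, 1, size, stdout) == size ? 0 : -1;
    }

    int
    main(void)
    {
            struct spdk_json_write_ctx *w;

            w = spdk_json_write_begin(write_cb, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);
            if (w == NULL) {
                    return 1;
            }
            spdk_json_write_object_begin(w);
            spdk_json_write_named_string(w, "name", "example");  /* placeholder fields */
            spdk_json_write_named_uint32(w, "value", 42);
            spdk_json_write_object_end(w);
            return spdk_json_write_end(w);
    }
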
20:48:37 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:09.798 00:06:09.798 00:06:09.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.798 http://cunit.sourceforge.net/ 00:06:09.798 00:06:09.798 00:06:09.798 Suite: json 00:06:09.798 Test: test_write_literal ...passed 00:06:09.798 Test: test_write_string_simple ...passed 00:06:09.798 Test: test_write_string_escapes ...passed 00:06:09.798 Test: test_write_string_utf16le ...passed 00:06:09.798 Test: test_write_number_int32 ...passed 00:06:09.798 Test: test_write_number_uint32 ...passed 00:06:09.798 Test: test_write_number_uint128 ...passed 00:06:09.798 Test: test_write_string_number_uint128 ...passed 00:06:09.798 Test: test_write_number_int64 ...passed 00:06:09.798 Test: test_write_number_uint64 ...passed 00:06:09.798 Test: test_write_number_double ...passed 00:06:09.798 Test: test_write_uuid ...passed 00:06:09.798 Test: test_write_array ...passed 00:06:09.798 Test: test_write_object ...passed 00:06:09.798 Test: test_write_nesting ...passed 00:06:09.798 Test: test_write_val ...passed 00:06:09.798 00:06:09.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.798 suites 1 1 n/a 0 0 00:06:09.798 tests 16 16 16 0 0 00:06:09.798 asserts 918 918 918 0 n/a 00:06:09.798 00:06:09.798 Elapsed time = 0.004 seconds 00:06:09.798 20:48:37 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:09.798 00:06:09.798 00:06:09.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.798 http://cunit.sourceforge.net/ 00:06:09.798 00:06:09.798 00:06:09.798 Suite: jsonrpc 00:06:09.798 Test: test_parse_request ...passed 00:06:09.798 Test: test_parse_request_streaming ...passed 00:06:09.798 00:06:09.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.798 suites 1 1 n/a 0 0 00:06:09.798 tests 2 2 2 0 0 00:06:09.798 asserts 289 289 289 0 n/a 00:06:09.798 00:06:09.798 Elapsed time = 0.004 seconds 00:06:09.798 00:06:09.798 real 0m0.124s 00:06:09.798 user 0m0.080s 00:06:09.798 sys 0m0.045s 00:06:09.798 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.798 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.798 ************************************ 00:06:09.798 END TEST unittest_json 00:06:09.798 ************************************ 00:06:09.798 20:48:37 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:09.798 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.798 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.798 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.798 ************************************ 00:06:09.798 START TEST unittest_rpc 00:06:09.798 ************************************ 00:06:09.798 20:48:37 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:09.798 20:48:37 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:09.798 00:06:09.798 00:06:09.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.798 http://cunit.sourceforge.net/ 00:06:09.798 00:06:09.798 00:06:09.798 Suite: rpc 00:06:09.798 Test: test_jsonrpc_handler ...passed 00:06:09.798 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:09.798 Test: test_rpc_get_methods ...[2024-06-09 20:48:37.922853] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:09.798 passed 00:06:09.798 Test: 
test_rpc_spdk_get_version ...passed 00:06:09.798 Test: test_spdk_rpc_listen_close ...passed 00:06:09.798 00:06:09.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.798 suites 1 1 n/a 0 0 00:06:09.798 tests 5 5 5 0 0 00:06:09.798 asserts 20 20 20 0 n/a 00:06:09.798 00:06:09.798 Elapsed time = 0.000 seconds 00:06:09.798 00:06:09.798 real 0m0.027s 00:06:09.798 user 0m0.014s 00:06:09.798 sys 0m0.014s 00:06:09.798 20:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.798 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.798 ************************************ 00:06:09.798 END TEST unittest_rpc 00:06:09.798 ************************************ 00:06:10.057 20:48:37 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:10.057 20:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.057 20:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.057 20:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:10.057 ************************************ 00:06:10.057 START TEST unittest_notify 00:06:10.057 ************************************ 00:06:10.057 20:48:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:10.057 00:06:10.057 00:06:10.057 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.057 http://cunit.sourceforge.net/ 00:06:10.057 00:06:10.057 00:06:10.057 Suite: app_suite 00:06:10.057 Test: notify ...passed 00:06:10.057 00:06:10.057 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.057 suites 1 1 n/a 0 0 00:06:10.057 tests 1 1 1 0 0 00:06:10.057 asserts 13 13 13 0 n/a 00:06:10.057 00:06:10.057 Elapsed time = 0.000 seconds 00:06:10.057 00:06:10.057 real 0m0.029s 00:06:10.057 user 0m0.022s 00:06:10.057 sys 0m0.007s 00:06:10.057 20:48:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.057 20:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:10.057 ************************************ 00:06:10.057 END TEST unittest_notify 00:06:10.057 ************************************ 00:06:10.058 20:48:38 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:10.058 20:48:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.058 20:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.058 20:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:10.058 ************************************ 00:06:10.058 START TEST unittest_nvme 00:06:10.058 ************************************ 00:06:10.058 20:48:38 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:10.058 20:48:38 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:10.058 00:06:10.058 00:06:10.058 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.058 http://cunit.sourceforge.net/ 00:06:10.058 00:06:10.058 00:06:10.058 Suite: nvme 00:06:10.058 Test: test_opc_data_transfer ...passed 00:06:10.058 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:10.058 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:10.058 Test: test_trid_parse_and_compare ...[2024-06-09 20:48:38.082838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:10.058 [2024-06-09 20:48:38.083247] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:10.058 [2024-06-09 
20:48:38.083377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:10.058 [2024-06-09 20:48:38.083441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:10.058 [2024-06-09 20:48:38.083494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:10.058 [2024-06-09 20:48:38.083611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:10.058 passed 00:06:10.058 Test: test_trid_trtype_str ...passed 00:06:10.058 Test: test_trid_adrfam_str ...passed 00:06:10.058 Test: test_nvme_ctrlr_probe ...[2024-06-09 20:48:38.083947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:10.058 passed 00:06:10.058 Test: test_spdk_nvme_probe ...[2024-06-09 20:48:38.084092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:10.058 [2024-06-09 20:48:38.084150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:10.058 [2024-06-09 20:48:38.084286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:10.058 passed 00:06:10.058 Test: test_spdk_nvme_connect ...[2024-06-09 20:48:38.084351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:10.058 [2024-06-09 20:48:38.084478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:10.058 [2024-06-09 20:48:38.084991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:10.058 [2024-06-09 20:48:38.085092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:10.058 passed 00:06:10.058 Test: test_nvme_ctrlr_probe_internal ...[2024-06-09 20:48:38.085293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:10.058 [2024-06-09 20:48:38.085379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:10.058 passed 00:06:10.058 Test: test_nvme_init_controllers ...[2024-06-09 20:48:38.085501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:10.058 passed 00:06:10.058 Test: test_nvme_driver_init ...[2024-06-09 20:48:38.085780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:10.058 [2024-06-09 20:48:38.085862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:10.058 [2024-06-09 20:48:38.198081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:10.058 [2024-06-09 20:48:38.198226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:10.058 passed 00:06:10.058 Test: test_spdk_nvme_detach ...passed 00:06:10.058 Test: test_nvme_completion_poll_cb ...passed 00:06:10.058 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:10.058 Test: 
test_nvme_allocate_request_null ...passed 00:06:10.058 Test: test_nvme_allocate_request ...passed 00:06:10.058 Test: test_nvme_free_request ...passed 00:06:10.058 Test: test_nvme_allocate_request_user_copy ...passed 00:06:10.058 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:10.058 Test: test_nvme_request_check_timeout ...passed 00:06:10.058 Test: test_nvme_wait_for_completion ...passed 00:06:10.058 Test: test_spdk_nvme_parse_func ...passed 00:06:10.058 Test: test_spdk_nvme_detach_async ...passed 00:06:10.058 Test: test_nvme_parse_addr ...[2024-06-09 20:48:38.198918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:10.058 passed 00:06:10.058 00:06:10.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.058 suites 1 1 n/a 0 0 00:06:10.058 tests 25 25 25 0 0 00:06:10.058 asserts 326 326 326 0 n/a 00:06:10.058 00:06:10.058 Elapsed time = 0.007 seconds 00:06:10.058 20:48:38 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:10.318 00:06:10.318 00:06:10.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.318 http://cunit.sourceforge.net/ 00:06:10.318 00:06:10.318 00:06:10.318 Suite: nvme_ctrlr 00:06:10.318 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-09 20:48:38.236073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-09 20:48:38.237826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-09 20:48:38.239158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-09 20:48:38.240532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-09 20:48:38.241905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 [2024-06-09 20:48:38.243121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-09 20:48:38.244353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-09 20:48:38.245557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-09 20:48:38.248066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 [2024-06-09 20:48:38.250468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-09 20:48:38.251744] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:10.318 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-09 20:48:38.254312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 [2024-06-09 20:48:38.255579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-09 20:48:38.257975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:10.318 Test: test_nvme_ctrlr_init_delay ...[2024-06-09 20:48:38.260573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_alloc_io_qpair_rr_1 ...[2024-06-09 20:48:38.261973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 [2024-06-09 20:48:38.262236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:10.318 [2024-06-09 20:48:38.262439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:10.318 passed 00:06:10.318 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:10.318 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:10.318 Test: test_alloc_io_qpair_wrr_1 ...[2024-06-09 20:48:38.262524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:10.318 [2024-06-09 20:48:38.262607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:10.318 [2024-06-09 20:48:38.262753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 passed 00:06:10.318 Test: test_alloc_io_qpair_wrr_2 ...[2024-06-09 20:48:38.262954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.318 [2024-06-09 20:48:38.263097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:10.318 passed 00:06:10.318 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-09 20:48:38.263394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:10.318 [2024-06-09 20:48:38.263581] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_fail ...[2024-06-09 20:48:38.263674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
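
The test_alloc_io_qpair_rr_1/wrr_* cases above drive spdk_nvme_ctrlr_alloc_io_qpair(). A sketch of the normal allocation path, assuming a connected controller handle obtained elsewhere (probe/attach omitted):

    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));

            /* With the default round-robin arbitration, any priority other than
             * urgent is rejected with the "invalid queue priority" error seen
             * in the log; weighted round robin must be negotiated first. */
            opts.qprio = SPDK_NVME_QPRIO_URGENT;

            /* Returns NULL when no queue IDs remain ("No free I/O queue IDs"). */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }
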
00:06:10.318 [2024-06-09 20:48:38.263768] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:10.318 [2024-06-09 20:48:38.263835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:10.318 passed 00:06:10.318 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:10.318 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:10.318 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:10.318 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-09 20:48:38.264144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:10.578 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:10.578 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:10.578 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-09 20:48:38.575866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-09 20:48:38.582997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-09 20:48:38.584253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 [2024-06-09 20:48:38.584331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:10.578 passed 00:06:10.578 Test: test_alloc_io_qpair_fail ...[2024-06-09 20:48:38.585504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:10.578 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:10.578 Test: test_nvme_ctrlr_set_state ...passed 00:06:10.578 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-09 20:48:38.585643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:10.578 [2024-06-09 20:48:38.585800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:10.578 [2024-06-09 20:48:38.585842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-09 20:48:38.600728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-09 20:48:38.633588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_reset ...[2024-06-09 20:48:38.635180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_aer_callback ...[2024-06-09 20:48:38.635560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-09 20:48:38.636998] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:10.578 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:10.578 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-09 20:48:38.638742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:10.578 Test: test_nvme_ctrlr_ana_resize ...[2024-06-09 20:48:38.640159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:10.578 Test: test_nvme_transport_ctrlr_ready ...[2024-06-09 20:48:38.641758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:10.578 [2024-06-09 20:48:38.641815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:10.578 passed 00:06:10.578 Test: test_nvme_ctrlr_disable ...[2024-06-09 20:48:38.641879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:10.578 passed 00:06:10.578 00:06:10.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.578 suites 1 1 n/a 0 0 00:06:10.578 tests 43 43 43 0 0 00:06:10.578 asserts 10418 10418 10418 0 n/a 00:06:10.578 00:06:10.578 Elapsed time = 0.366 seconds 00:06:10.578 20:48:38 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:10.578 00:06:10.578 00:06:10.578 CUnit - A unit testing framework for C - Version 2.1-3 
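Every *_ut binary in this log prints the same banner, per-test "...passed" lines, and Run Summary table because each is a standalone CUnit (2.1-3) runner. A minimal sketch of that structure using only the standard CUnit Basic interface; this is not SPDK's actual harness, and the suite/test names are placeholders:

#include <CUnit/Basic.h>

static void
test_example(void)
{
    CU_ASSERT(1 + 1 == 2); /* each CU_ASSERT feeds the "asserts" column */
}

int
main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }

    suite = CU_add_suite("example", NULL, NULL); /* no setup/teardown */
    if (suite == NULL ||
        CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test "...passed" lines */
    CU_basic_run_tests();              /* prints the Run Summary table */
    CU_cleanup_registry();
    return CU_get_error();
}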
00:06:10.578 http://cunit.sourceforge.net/ 00:06:10.578 00:06:10.578 00:06:10.578 Suite: nvme_ctrlr_cmd 00:06:10.578 Test: test_get_log_pages ...passed 00:06:10.578 Test: test_set_feature_cmd ...passed 00:06:10.578 Test: test_set_feature_ns_cmd ...passed 00:06:10.578 Test: test_get_feature_cmd ...passed 00:06:10.578 Test: test_get_feature_ns_cmd ...passed 00:06:10.578 Test: test_abort_cmd ...passed 00:06:10.578 Test: test_set_host_id_cmds ...[2024-06-09 20:48:38.685343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:10.578 passed 00:06:10.578 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:10.578 Test: test_io_raw_cmd ...passed 00:06:10.578 Test: test_io_raw_cmd_with_md ...passed 00:06:10.578 Test: test_namespace_attach ...passed 00:06:10.578 Test: test_namespace_detach ...passed 00:06:10.578 Test: test_namespace_create ...passed 00:06:10.578 Test: test_namespace_delete ...passed 00:06:10.578 Test: test_doorbell_buffer_config ...passed 00:06:10.578 Test: test_format_nvme ...passed 00:06:10.578 Test: test_fw_commit ...passed 00:06:10.578 Test: test_fw_image_download ...passed 00:06:10.578 Test: test_sanitize ...passed 00:06:10.578 Test: test_directive ...passed 00:06:10.578 Test: test_nvme_request_add_abort ...passed 00:06:10.578 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:10.578 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:10.578 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:10.578 00:06:10.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.578 suites 1 1 n/a 0 0 00:06:10.578 tests 24 24 24 0 0 00:06:10.578 asserts 198 198 198 0 n/a 00:06:10.578 00:06:10.578 Elapsed time = 0.001 seconds 00:06:10.578 20:48:38 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:10.578 00:06:10.578 00:06:10.578 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.578 http://cunit.sourceforge.net/ 00:06:10.578 00:06:10.578 00:06:10.578 Suite: nvme_ctrlr_cmd 00:06:10.578 Test: test_geometry_cmd ...passed 00:06:10.578 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:10.578 00:06:10.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.578 suites 1 1 n/a 0 0 00:06:10.578 tests 2 2 2 0 0 00:06:10.578 asserts 7 7 7 0 n/a 00:06:10.578 00:06:10.578 Elapsed time = 0.000 seconds 00:06:10.578 20:48:38 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:10.578 00:06:10.578 00:06:10.578 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.578 http://cunit.sourceforge.net/ 00:06:10.578 00:06:10.578 00:06:10.578 Suite: nvme 00:06:10.578 Test: test_nvme_ns_construct ...passed 00:06:10.578 Test: test_nvme_ns_uuid ...passed 00:06:10.578 Test: test_nvme_ns_csi ...passed 00:06:10.578 Test: test_nvme_ns_data ...passed 00:06:10.578 Test: test_nvme_ns_set_identify_data ...passed 00:06:10.578 Test: test_spdk_nvme_ns_get_values ...passed 00:06:10.578 Test: test_spdk_nvme_ns_is_active ...passed 00:06:10.578 Test: spdk_nvme_ns_supports ...passed 00:06:10.578 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:10.578 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:10.578 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:10.578 Test: test_nvme_ns_find_id_desc ...passed 00:06:10.578 00:06:10.578 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.578 suites 1 1 n/a 0 0 00:06:10.578 tests 
12 12 12 0 0 00:06:10.578 asserts 83 83 83 0 n/a 00:06:10.578 00:06:10.578 Elapsed time = 0.001 seconds 00:06:10.578 20:48:38 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:10.838 00:06:10.838 00:06:10.838 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.838 http://cunit.sourceforge.net/ 00:06:10.838 00:06:10.838 00:06:10.838 Suite: nvme_ns_cmd 00:06:10.838 Test: split_test ...passed 00:06:10.838 Test: split_test2 ...passed 00:06:10.838 Test: split_test3 ...passed 00:06:10.838 Test: split_test4 ...passed 00:06:10.838 Test: test_nvme_ns_cmd_flush ...passed 00:06:10.838 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:10.838 Test: test_nvme_ns_cmd_copy ...passed 00:06:10.838 Test: test_io_flags ...[2024-06-09 20:48:38.755852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:10.838 passed 00:06:10.838 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:10.838 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:10.838 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:10.838 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:10.838 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:10.838 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:10.838 Test: test_cmd_child_request ...passed 00:06:10.838 Test: test_nvme_ns_cmd_readv ...passed 00:06:10.838 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:10.838 Test: test_nvme_ns_cmd_writev ...passed 00:06:10.838 Test: test_nvme_ns_cmd_write_with_md ...[2024-06-09 20:48:38.756959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:10.838 passed 00:06:10.838 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:10.838 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:10.838 Test: test_nvme_ns_cmd_comparev ...passed 00:06:10.838 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:10.838 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:10.838 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:10.838 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:10.838 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:10.838 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:06:10.838 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-06-09 20:48:38.758756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:10.838 [2024-06-09 20:48:38.758870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:10.838 passed 00:06:10.838 Test: test_nvme_ns_cmd_verify ...passed 00:06:10.838 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:10.838 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:10.838 00:06:10.838 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.838 suites 1 1 n/a 0 0 00:06:10.838 tests 32 32 32 0 0 00:06:10.838 asserts 550 550 550 0 n/a 00:06:10.838 00:06:10.838 Elapsed time = 0.004 seconds 00:06:10.838 20:48:38 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:10.838 00:06:10.838 00:06:10.838 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.838 http://cunit.sourceforge.net/ 00:06:10.838 00:06:10.838 00:06:10.838 Suite: nvme_ns_cmd 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
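The test_io_flags failures above ("Invalid io_flags 0xfffc", "Invalid io_flags 0xffff000f") exercise a reject-unknown-bits mask check in _is_io_flags_valid(). A hedged sketch of that kind of check; the mask value below is illustrative, not SPDK's real valid-flags mask:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative mask; SPDK's actual definition differs. */
#define EXAMPLE_IO_FLAGS_VALID_MASK 0xffff0003u

static bool
example_is_io_flags_valid(uint32_t io_flags)
{
    if (io_flags & ~EXAMPLE_IO_FLAGS_VALID_MASK) {
        /* Mirrors the "Invalid io_flags 0x..." log line above. */
        fprintf(stderr, "Invalid io_flags 0x%x\n", io_flags);
        return false;
    }
    return true;
}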
00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:10.838 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:10.838 00:06:10.838 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.838 suites 1 1 n/a 0 0 00:06:10.838 tests 12 12 12 0 0 00:06:10.838 asserts 123 123 123 0 n/a 00:06:10.838 00:06:10.838 Elapsed time = 0.001 seconds 00:06:10.838 20:48:38 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:10.838 00:06:10.838 00:06:10.838 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.838 http://cunit.sourceforge.net/ 00:06:10.838 00:06:10.838 00:06:10.838 Suite: nvme_qpair 00:06:10.838 Test: test3 ...passed 00:06:10.838 Test: test_ctrlr_failed ...passed 00:06:10.838 Test: struct_packing ...passed 00:06:10.838 Test: test_nvme_qpair_process_completions ...[2024-06-09 20:48:38.811313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:10.838 passed 00:06:10.838 Test: test_nvme_completion_is_retry ...passed 00:06:10.838 Test: test_get_status_string ...passed 00:06:10.838 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-06-09 20:48:38.811665] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:10.838 [2024-06-09 20:48:38.811747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:10.838 [2024-06-09 20:48:38.811847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:10.838 passed 00:06:10.838 Test: test_nvme_qpair_submit_request ...passed 00:06:10.838 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:10.838 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:10.838 Test: test_nvme_qpair_init_deinit ...[2024-06-09 20:48:38.812270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:10.838 passed 00:06:10.838 Test: test_nvme_get_sgl_print_info ...passed 00:06:10.838 00:06:10.838 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.839 suites 1 1 n/a 0 0 00:06:10.839 tests 12 12 12 0 0 00:06:10.839 asserts 154 154 154 0 n/a 00:06:10.839 00:06:10.839 Elapsed time = 0.001 seconds 00:06:10.839 20:48:38 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:10.839 00:06:10.839 00:06:10.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.839 http://cunit.sourceforge.net/ 00:06:10.839 00:06:10.839 00:06:10.839 Suite: nvme_pcie 00:06:10.839 Test: test_prp_list_append 
...[2024-06-09 20:48:38.841365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:10.839 [2024-06-09 20:48:38.841648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:10.839 [2024-06-09 20:48:38.841693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:10.839 [2024-06-09 20:48:38.841914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:10.839 passed 00:06:10.839 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:10.839 Test: test_shadow_doorbell_update ...passed 00:06:10.839 Test: test_build_contig_hw_sgl_request ...passed 00:06:10.839 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:10.839 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:10.839 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:10.839 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-06-09 20:48:38.841990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:10.839 [2024-06-09 20:48:38.842208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:10.839 [2024-06-09 20:48:38.842289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
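The test_prp_list_append errors above encode two PRP invariants from the NVMe spec: the payload start only needs dword alignment, while every PRP entry after the first must begin on a page boundary. A small sketch of those two checks; the constant and names are illustrative, not SPDK's internals:

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_PAGE_SIZE 0x1000u /* 4 KiB PRP page, as in the test */

static bool
example_prp_args_valid(uint64_t virt_addr, uint64_t prp2)
{
    /* "virt_addr 0x100001 not dword aligned": 0x100001 & 3 != 0 */
    if (virt_addr & 3) {
        return false;
    }
    /* "PRP 2 not page aligned (0x900800)": 0x900800 & 0xfff != 0 */
    if (prp2 & (EXAMPLE_PAGE_SIZE - 1)) {
        return false;
    }
    return true;
}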
00:06:10.839 [2024-06-09 20:48:38.842360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:10.839 passed 00:06:10.839 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-06-09 20:48:38.842405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:10.839 passed 00:06:10.839 00:06:10.839 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.839 suites 1 1 n/a 0 0 00:06:10.839 tests 14 14 14 0 0 00:06:10.839 asserts 235 235 235 0 n/a 00:06:10.839 00:06:10.839 Elapsed time = 0.001 seconds 00:06:10.839 [2024-06-09 20:48:38.842446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:10.839 20:48:38 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:10.839 00:06:10.839 00:06:10.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.839 http://cunit.sourceforge.net/ 00:06:10.839 00:06:10.839 00:06:10.839 Suite: nvme_ns_cmd 00:06:10.839 Test: nvme_poll_group_create_test ...passed 00:06:10.839 Test: nvme_poll_group_add_remove_test ...passed 00:06:10.839 Test: nvme_poll_group_process_completions ...passed 00:06:10.839 Test: nvme_poll_group_destroy_test ...passed 00:06:10.839 Test: nvme_poll_group_get_free_stats ...passed 00:06:10.839 00:06:10.839 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.839 suites 1 1 n/a 0 0 00:06:10.839 tests 5 5 5 0 0 00:06:10.839 asserts 75 75 75 0 n/a 00:06:10.839 00:06:10.839 Elapsed time = 0.001 seconds 00:06:10.839 20:48:38 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:10.839 00:06:10.839 00:06:10.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.839 http://cunit.sourceforge.net/ 00:06:10.839 00:06:10.839 00:06:10.839 Suite: nvme_quirks 00:06:10.839 Test: test_nvme_quirks_striping ...passed 00:06:10.839 00:06:10.839 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.839 suites 1 1 n/a 0 0 00:06:10.839 tests 1 1 1 0 0 00:06:10.839 asserts 5 5 5 0 n/a 00:06:10.839 00:06:10.839 Elapsed time = 0.000 seconds 00:06:10.839 20:48:38 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:10.839 00:06:10.839 00:06:10.839 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.839 http://cunit.sourceforge.net/ 00:06:10.839 00:06:10.839 00:06:10.839 Suite: nvme_tcp 00:06:10.839 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:10.839 Test: test_nvme_tcp_build_iovs ...passed 00:06:10.839 Test: test_nvme_tcp_build_sgl_request ...[2024-06-09 20:48:38.918905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcb0f852a0, and the iovcnt=16, remaining_size=28672 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:10.839 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:10.839 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:10.839 Test: test_nvme_tcp_req_get ...passed 00:06:10.839 Test: test_nvme_tcp_req_init ...passed 00:06:10.839 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:10.839 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:10.839 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:06:10.839 Test: test_nvme_tcp_alloc_reqs ...[2024-06-09 20:48:38.919645] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86fc0 is same with the state(6) to be set 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:10.839 Test: test_nvme_tcp_pdu_ch_handle ...[2024-06-09 20:48:38.920062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86150 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcb0f86c80 00:06:10.839 [2024-06-09 20:48:38.920211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:10.839 [2024-06-09 20:48:38.920331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:10.839 [2024-06-09 20:48:38.920520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:10.839 [2024-06-09 20:48:38.920624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_qpair_connect_sock ...[2024-06-09 20:48:38.920900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.920963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86610 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.921141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:10.839 [2024-06-09 20:48:38.921202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:10.839 Test: test_nvme_tcp_c2h_payload_handle ...[2024-06-09 20:48:38.921494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:06:10.839 [2024-06-09 20:48:38.921681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcb0f867c0): PDU Sequence Error 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_icresp_handle ...[2024-06-09 20:48:38.921867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:10.839 [2024-06-09 20:48:38.921925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:10.839 [2024-06-09 20:48:38.921979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86160 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.922061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:10.839 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-06-09 20:48:38.922114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86160 is same with the state(5) to be set 00:06:10.839 [2024-06-09 20:48:38.922216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f86160 is same with the state(0) to be set 00:06:10.839 [2024-06-09 20:48:38.922293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcb0f86c80): PDU Sequence Error 00:06:10.839 [2024-06-09 20:48:38.922378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcb0f85440 00:06:10.839 passed 00:06:10.839 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:10.839 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-06-09 20:48:38.922532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcb0f84ac0, errno=0, rc=0 00:06:10.839 [2024-06-09 20:48:38.922591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f84ac0 is same with the state(5) to be set 00:06:10.840 [2024-06-09 20:48:38.922662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb0f84ac0 is same with the state(5) to be set 00:06:10.840 passed 00:06:10.840 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-09 20:48:38.922717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcb0f84ac0 (0): Success 00:06:10.840 [2024-06-09 20:48:38.922767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcb0f84ac0 (0): Success 00:06:11.099 [2024-06-09 20:48:39.036831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:11.099 [2024-06-09 20:48:39.036964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
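Several suites above revolve around the completion-polling entry point: nvme_qpair's "CQ transport error -6" and nvme_tcp's "Failed to flush tqpair" both surface through it. A hedged sketch of the usual polling loop around the public spdk_nvme_qpair_process_completions(); the surrounding function and the done flag are illustrative:

#include <stdbool.h>
#include "spdk/nvme.h"

static int
example_poll_until_done(struct spdk_nvme_qpair *qpair, volatile bool *done)
{
    int32_t rc;

    while (!*done) {
        /* 0 = no cap on the number of completions reaped per call. */
        rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* Negative on transport failure, e.g. -6 (ENXIO,
             * "No such device or address") as logged above. */
            return (int)rc;
        }
    }
    return 0;
}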
00:06:11.099 passed 00:06:11.099 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:11.099 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:06:11.099 Test: test_nvme_tcp_ctrlr_construct ...[2024-06-09 20:48:39.037190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:11.099 [2024-06-09 20:48:39.037253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:11.099 [2024-06-09 20:48:39.037459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:11.099 [2024-06-09 20:48:39.037528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:11.099 [2024-06-09 20:48:39.037673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:11.099 [2024-06-09 20:48:39.037773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:11.099 [2024-06-09 20:48:39.037885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:11.099 passed 00:06:11.099 Test: test_nvme_tcp_qpair_submit_request ...[2024-06-09 20:48:39.037972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:11.099 [2024-06-09 20:48:39.038120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:11.099 [2024-06-09 20:48:39.038171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:11.099 passed 00:06:11.099 00:06:11.099 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.099 suites 1 1 n/a 0 0 00:06:11.099 tests 27 27 27 0 0 00:06:11.099 asserts 624 624 624 0 n/a 00:06:11.099 00:06:11.099 Elapsed time = 0.119 seconds 00:06:11.099 20:48:39 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:11.099 00:06:11.099 00:06:11.099 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.099 http://cunit.sourceforge.net/ 00:06:11.099 00:06:11.099 00:06:11.099 Suite: nvme_transport 00:06:11.099 Test: test_nvme_get_transport ...passed 00:06:11.099 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:11.099 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:11.099 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:11.099 Test: test_ctrlr_get_memory_domains ...passed 00:06:11.099 00:06:11.099 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.099 suites 1 1 n/a 0 0 00:06:11.099 tests 5 5 5 0 0 00:06:11.099 asserts 28 28 28 0 n/a 00:06:11.099 00:06:11.099 Elapsed time = 0.000 seconds 00:06:11.099 20:48:39 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:11.099 00:06:11.099 00:06:11.099 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.099 http://cunit.sourceforge.net/ 00:06:11.099 00:06:11.099 00:06:11.099 Suite: nvme_io_msg 00:06:11.099 Test: test_nvme_io_msg_send ...passed 00:06:11.099 Test: 
test_nvme_io_msg_process ...passed 00:06:11.099 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:11.099 00:06:11.099 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.099 suites 1 1 n/a 0 0 00:06:11.099 tests 3 3 3 0 0 00:06:11.099 asserts 56 56 56 0 n/a 00:06:11.099 00:06:11.099 Elapsed time = 0.000 seconds 00:06:11.099 20:48:39 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:11.099 00:06:11.099 00:06:11.099 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.099 http://cunit.sourceforge.net/ 00:06:11.099 00:06:11.099 00:06:11.099 Suite: nvme_pcie_common 00:06:11.099 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:06:11.099 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-06-09 20:48:39.137022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:11.099 passed 00:06:11.099 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:11.099 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:06:11.099 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-06-09 20:48:39.138080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:11.099 [2024-06-09 20:48:39.138261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:11.099 [2024-06-09 20:48:39.138324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:11.099 passed 00:06:11.100 Test: test_nvme_pcie_poll_group_get_stats ...[2024-06-09 20:48:39.138823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:11.100 [2024-06-09 20:48:39.138895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:11.100 passed 00:06:11.100 00:06:11.100 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.100 suites 1 1 n/a 0 0 00:06:11.100 tests 6 6 6 0 0 00:06:11.100 asserts 148 148 148 0 n/a 00:06:11.100 00:06:11.100 Elapsed time = 0.002 seconds 00:06:11.100 20:48:39 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:11.100 00:06:11.100 00:06:11.100 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.100 http://cunit.sourceforge.net/ 00:06:11.100 00:06:11.100 00:06:11.100 Suite: nvme_fabric 00:06:11.100 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:11.100 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:11.100 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:11.100 Test: test_nvme_fabric_discover_probe ...passed 00:06:11.100 Test: test_nvme_fabric_qpair_connect ...passed 00:06:11.100 00:06:11.100 [2024-06-09 20:48:39.168628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:11.100 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.100 suites 1 1 n/a 0 0 00:06:11.100 tests 5 5 5 0 0 00:06:11.100 asserts 60 60 60 0 n/a 00:06:11.100 00:06:11.100 Elapsed time = 0.001 seconds 00:06:11.100 20:48:39 -- unit/unittest.sh@102 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:11.100 00:06:11.100 00:06:11.100 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.100 http://cunit.sourceforge.net/ 00:06:11.100 00:06:11.100 00:06:11.100 Suite: nvme_opal 00:06:11.100 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:11.100 Test: test_opal_add_short_atom_header ...passed 00:06:11.100 00:06:11.100 [2024-06-09 20:48:39.201422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:11.100 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.100 suites 1 1 n/a 0 0 00:06:11.100 tests 2 2 2 0 0 00:06:11.100 asserts 22 22 22 0 n/a 00:06:11.100 00:06:11.100 Elapsed time = 0.001 seconds 00:06:11.100 00:06:11.100 real 0m1.145s 00:06:11.100 user 0m0.630s 00:06:11.100 sys 0m0.370s 00:06:11.100 ************************************ 00:06:11.100 END TEST unittest_nvme 00:06:11.100 ************************************ 00:06:11.100 20:48:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.100 20:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:11.100 20:48:39 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:11.100 20:48:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.100 20:48:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.100 20:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:11.100 ************************************ 00:06:11.100 START TEST unittest_log 00:06:11.100 ************************************ 00:06:11.100 20:48:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:11.358 00:06:11.358 00:06:11.358 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.358 http://cunit.sourceforge.net/ 00:06:11.358 00:06:11.358 00:06:11.358 Suite: log 00:06:11.358 Test: log_test ...passed 00:06:11.358 Test: deprecation ...[2024-06-09 20:48:39.283466] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:11.358 [2024-06-09 20:48:39.283728] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:11.358 log dump test: 00:06:11.358 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:11.358 spdk dump test: 00:06:11.358 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:11.358 spdk dump test: 00:06:11.358 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:11.358 00000010 65 20 63 68 61 72 73 e chars 00:06:12.301 passed 00:06:12.301 00:06:12.301 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.301 suites 1 1 n/a 0 0 00:06:12.301 tests 2 2 2 0 0 00:06:12.301 asserts 73 73 73 0 n/a 00:06:12.301 00:06:12.301 Elapsed time = 0.001 seconds 00:06:12.301 00:06:12.301 real 0m1.032s 00:06:12.301 user 0m0.020s 00:06:12.301 sys 0m0.013s 00:06:12.301 20:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.301 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.301 ************************************ 00:06:12.301 END TEST unittest_log 00:06:12.301 ************************************ 00:06:12.301 20:48:40 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:12.301 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.301 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.301 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.301 
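The log_ut dump output above ("00000000 6c 6f 67 20 64 75 6d 70 log dump") is a classic offset / hex-bytes / ASCII layout. A self-contained sketch that produces the same shape; this is generic C, not SPDK's actual dump routine:

#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

static void
example_dump(FILE *fp, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    size_t i, j;

    for (i = 0; i < len; i += 16) {
        fprintf(fp, "%08zx ", i);                  /* offset column */
        for (j = i; j < i + 16 && j < len; j++) {
            fprintf(fp, "%02x ", p[j]);            /* up to 16 hex bytes */
        }
        for (j = i; j < i + 16 && j < len; j++) {
            fputc(isprint(p[j]) ? p[j] : '.', fp); /* ASCII column */
        }
        fputc('\n', fp);
    }
}

For example, example_dump(stderr, "spdk dump 16 more chars", 23) yields the two rows shown above, ending in "00000010 65 20 63 68 61 72 73 e chars".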
************************************ 00:06:12.301 START TEST unittest_lvol 00:06:12.301 ************************************ 00:06:12.301 20:48:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:12.301 00:06:12.301 00:06:12.301 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.301 http://cunit.sourceforge.net/ 00:06:12.301 00:06:12.301 00:06:12.301 Suite: lvol 00:06:12.301 Test: lvs_init_unload_success ...[2024-06-09 20:48:40.382550] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:12.301 passed 00:06:12.301 Test: lvs_init_destroy_success ...passed 00:06:12.301 Test: lvs_init_opts_success ...[2024-06-09 20:48:40.383129] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:12.301 passed 00:06:12.301 Test: lvs_unload_lvs_is_null_fail ...passed 00:06:12.301 Test: lvs_names ...[2024-06-09 20:48:40.383380] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:12.301 [2024-06-09 20:48:40.383447] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:12.301 [2024-06-09 20:48:40.383496] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:12.301 [2024-06-09 20:48:40.383656] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:12.301 passed 00:06:12.301 Test: lvol_create_destroy_success ...passed 00:06:12.301 Test: lvol_create_fail ...[2024-06-09 20:48:40.384182] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:12.301 passed 00:06:12.301 Test: lvol_destroy_fail ...[2024-06-09 20:48:40.384322] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:12.301 passed 00:06:12.301 Test: lvol_close ...[2024-06-09 20:48:40.384631] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:12.301 passed 00:06:12.301 Test: lvol_resize ...[2024-06-09 20:48:40.384821] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:12.301 [2024-06-09 20:48:40.384890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:12.301 passed 00:06:12.301 Test: lvol_set_read_only ...passed 00:06:12.301 Test: test_lvs_load ...[2024-06-09 20:48:40.385743] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:12.301 passed 00:06:12.301 Test: lvols_load ...[2024-06-09 20:48:40.385806] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:12.301 [2024-06-09 20:48:40.386098] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:12.301 passed 00:06:12.301 Test: lvol_open ...[2024-06-09 20:48:40.386216] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:12.301 passed 00:06:12.301 Test: lvol_snapshot ...passed 00:06:12.301 Test: lvol_snapshot_fail ...passed 00:06:12.301 Test: lvol_clone ...[2024-06-09 20:48:40.386911] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already 
exists 00:06:12.301 passed 00:06:12.301 Test: lvol_clone_fail ...passed 00:06:12.301 Test: lvol_iter_clones ...[2024-06-09 20:48:40.387436] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:12.301 passed 00:06:12.301 Test: lvol_refcnt ...[2024-06-09 20:48:40.387929] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 39745b63-ce38-4d15-9e07-a239be5584e3 because it is still open 00:06:12.301 passed 00:06:12.301 Test: lvol_names ...[2024-06-09 20:48:40.388166] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:12.301 [2024-06-09 20:48:40.388267] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:12.301 [2024-06-09 20:48:40.388484] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:12.302 passed 00:06:12.302 Test: lvol_create_thin_provisioned ...passed 00:06:12.302 Test: lvol_rename ...[2024-06-09 20:48:40.388866] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:12.302 [2024-06-09 20:48:40.388963] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:12.302 passed 00:06:12.302 Test: lvs_rename ...passed 00:06:12.302 Test: lvol_inflate ...[2024-06-09 20:48:40.389216] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:12.302 [2024-06-09 20:48:40.389427] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:12.302 passed 00:06:12.302 Test: lvol_decouple_parent ...[2024-06-09 20:48:40.389699] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:12.302 passed 00:06:12.302 Test: lvol_get_xattr ...passed 00:06:12.302 Test: lvol_esnap_reload ...passed 00:06:12.302 Test: lvol_esnap_create_bad_args ...[2024-06-09 20:48:40.390174] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:12.302 [2024-06-09 20:48:40.390229] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
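The lvs_verify_lvol_name errors above ("Name has no null terminator.", "lvol with name ... already exists") imply two checks: the name must terminate inside its fixed-size buffer, and it must not collide with an existing lvol. A hedged sketch of that logic; the callback-based lookup is illustrative, as SPDK's real implementation walks its own lvol lists:

#include <stdbool.h>
#include <string.h>

static bool
example_verify_lvol_name(const char *name, size_t buf_len,
                         bool (*name_exists)(const char *name))
{
    /* "Name has no null terminator." */
    if (memchr(name, '\0', buf_len) == NULL) {
        return false;
    }
    /* "lvol with name ... already exists" */
    if (name_exists(name)) {
        return false;
    }
    return true;
}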
00:06:12.302 [2024-06-09 20:48:40.390291] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:12.302 [2024-06-09 20:48:40.390417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:12.302 passed 00:06:12.302 Test: lvol_esnap_create_delete ...[2024-06-09 20:48:40.390543] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:12.302 passed 00:06:12.302 Test: lvol_esnap_load_esnaps ...passed 00:06:12.302 Test: lvol_esnap_missing ...[2024-06-09 20:48:40.390811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:12.302 [2024-06-09 20:48:40.390960] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:12.302 [2024-06-09 20:48:40.391018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:12.302 passed 00:06:12.302 Test: lvol_esnap_hotplug ... 00:06:12.302 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:12.302 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:12.302 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:12.302 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:12.302 [2024-06-09 20:48:40.391675] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d89e9459-28b0-438f-ae20-0681ea4fbbef: failed to create esnap bs_dev: error -12 00:06:12.302 [2024-06-09 20:48:40.391889] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7cf8e268-f090-4909-adb7-d5d1985a615d: failed to create esnap bs_dev: error -12 00:06:12.302 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:12.302 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:12.302 [2024-06-09 20:48:40.392002] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a9ed1fdf-bba2-4bb0-9868-6d9d3c2b4078: failed to create esnap bs_dev: error -12 00:06:12.302 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:12.302 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:12.302 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:12.302 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:12.302 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:12.302 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:12.302 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:12.302 passed 00:06:12.302 Test: lvol_get_by ...passed 00:06:12.302 00:06:12.302 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.302 suites 1 1 n/a 0 0 00:06:12.302 tests 34 34 34 0 0 00:06:12.302 asserts 1439 1439 1439 0 n/a 00:06:12.302 00:06:12.302 Elapsed time = 0.011 seconds 00:06:12.302 00:06:12.302 real 0m0.050s 00:06:12.302 user 0m0.017s 00:06:12.302 sys 0m0.033s 00:06:12.302 20:48:40 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.302 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.302 ************************************ 00:06:12.302 END TEST unittest_lvol 00:06:12.302 ************************************ 00:06:12.302 20:48:40 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:12.302 20:48:40 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:12.302 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.302 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.302 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.302 ************************************ 00:06:12.302 START TEST unittest_nvme_rdma 00:06:12.302 ************************************ 00:06:12.302 20:48:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:12.564 00:06:12.564 00:06:12.564 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.564 http://cunit.sourceforge.net/ 00:06:12.564 00:06:12.564 00:06:12.564 Suite: nvme_rdma 00:06:12.564 Test: test_nvme_rdma_build_sgl_request ...[2024-06-09 20:48:40.482209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:12.564 Test: test_nvme_rdma_build_contig_request ...passed 00:06:12.564 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:12.564 Test: test_nvme_rdma_create_reqs ...[2024-06-09 20:48:40.482631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:12.564 [2024-06-09 20:48:40.482765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:12.564 [2024-06-09 20:48:40.482900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:12.564 [2024-06-09 20:48:40.483042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_create_rsps ...[2024-06-09 20:48:40.483463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-06-09 20:48:40.483694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:12.564 [2024-06-09 20:48:40.483778] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
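The "SGL length 16777216 exceeds max keyed SGL block size 16777215" errors above fall out of the NVMe-oF keyed SGL data block descriptor, whose length field is 24 bits wide, so exactly 16 MiB no longer fits. A one-function sketch of that bound; the names are illustrative:

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_MAX_KEYED_SGL_LEN ((1u << 24) - 1) /* 16777215 bytes */

static bool
example_keyed_sgl_length_ok(uint64_t length)
{
    /* 16777216 (16 MiB) fails, 16777215 passes, matching the log. */
    return length <= EXAMPLE_MAX_KEYED_SGL_LEN;
}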
00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_poller_create ...passed 00:06:12.564 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:06:12.564 Test: test_nvme_rdma_ctrlr_construct ...[2024-06-09 20:48:40.484024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:12.564 Test: test_nvme_rdma_req_init ...passed 00:06:12.564 Test: test_nvme_rdma_validate_cm_event ...passed 00:06:12.564 Test: test_nvme_rdma_qpair_init ...passed 00:06:12.564 Test: test_nvme_rdma_qpair_submit_request ...[2024-06-09 20:48:40.484437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:12.564 [2024-06-09 20:48:40.484529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:12.564 passed 00:06:12.564 Test: test_nvme_rdma_memory_domain ...passed 00:06:12.564 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:12.564 Test: test_rdma_get_memory_translation ...[2024-06-09 20:48:40.484743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:12.564 [2024-06-09 20:48:40.484865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:12.564 [2024-06-09 20:48:40.484949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:12.564 passed 00:06:12.564 Test: test_get_rdma_qpair_from_wc ...passed 00:06:12.565 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:12.565 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:06:12.565 Test: test_nvme_rdma_qpair_set_poller ...[2024-06-09 20:48:40.485069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:12.565 [2024-06-09 20:48:40.485135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:12.565 [2024-06-09 20:48:40.485263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:12.565 [2024-06-09 20:48:40.485328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:12.565 [2024-06-09 20:48:40.485387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffeab15c3c0 on poll group 0x60b0000001a0 00:06:12.565 [2024-06-09 20:48:40.485467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:12.565 [2024-06-09 20:48:40.485577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:12.565 [2024-06-09 20:48:40.485637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffeab15c3c0 on poll group 0x60b0000001a0 00:06:12.565 [2024-06-09 20:48:40.485759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:12.565 passed 00:06:12.565 00:06:12.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.565 suites 1 1 n/a 0 0 00:06:12.565 tests 22 22 22 0 0 00:06:12.565 asserts 412 412 412 0 n/a 00:06:12.565 00:06:12.565 Elapsed time = 0.004 seconds 00:06:12.565 00:06:12.565 real 0m0.034s 00:06:12.565 user 0m0.014s 00:06:12.565 sys 0m0.021s 00:06:12.565 ************************************ 00:06:12.565 END TEST unittest_nvme_rdma 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 20:48:40 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:12.565 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.565 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 START TEST unittest_nvmf_transport 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:12.565 00:06:12.565 00:06:12.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.565 http://cunit.sourceforge.net/ 00:06:12.565 00:06:12.565 00:06:12.565 Suite: nvmf 00:06:12.565 Test: test_spdk_nvmf_transport_create ...[2024-06-09 20:48:40.575892] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:12.565 [2024-06-09 20:48:40.576229] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:12.565 [2024-06-09 20:48:40.576296] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:12.565 passed 00:06:12.565 Test: test_nvmf_transport_poll_group_create ...[2024-06-09 20:48:40.576427] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:12.565 passed 00:06:12.565 Test: test_spdk_nvmf_transport_opts_init ...passed 00:06:12.565 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:12.565 00:06:12.565 [2024-06-09 20:48:40.576743] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
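The nvmf transport_create errors above ("io_unit_size cannot be 0", "max_io_size 4096 must be a power of 2 and be greater than or equal 8KB") are plain option validation. A hedged sketch of the max_io_size rule; the real checks at transport.c validate more fields than this:

#include <stdbool.h>
#include <stdint.h>

static bool
example_max_io_size_ok(uint32_t max_io_size)
{
    /* Power of two: non-zero and no bit shared with (v - 1). */
    bool pow2 = max_io_size != 0 && (max_io_size & (max_io_size - 1)) == 0;

    /* 4096 is a power of two but below the 8 KiB floor, hence the error. */
    return pow2 && max_io_size >= 8 * 1024;
}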
00:06:12.565 [2024-06-09 20:48:40.576831] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:12.565 [2024-06-09 20:48:40.576866] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:12.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.565 suites 1 1 n/a 0 0 00:06:12.565 tests 4 4 4 0 0 00:06:12.565 asserts 49 49 49 0 n/a 00:06:12.565 00:06:12.565 Elapsed time = 0.001 seconds 00:06:12.565 00:06:12.565 real 0m0.040s 00:06:12.565 user 0m0.020s 00:06:12.565 sys 0m0.020s 00:06:12.565 20:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 END TEST unittest_nvmf_transport 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:12.565 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.565 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 START TEST unittest_rdma 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:12.565 00:06:12.565 00:06:12.565 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.565 http://cunit.sourceforge.net/ 00:06:12.565 00:06:12.565 00:06:12.565 Suite: rdma_common 00:06:12.565 Test: test_spdk_rdma_pd ...[2024-06-09 20:48:40.662629] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:12.565 [2024-06-09 20:48:40.663003] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:12.565 passed 00:06:12.565 00:06:12.565 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.565 suites 1 1 n/a 0 0 00:06:12.565 tests 1 1 1 0 0 00:06:12.565 asserts 31 31 31 0 n/a 00:06:12.565 00:06:12.565 Elapsed time = 0.001 seconds 00:06:12.565 00:06:12.565 real 0m0.030s 00:06:12.565 user 0m0.025s 00:06:12.565 sys 0m0.005s 00:06:12.565 ************************************ 00:06:12.565 END TEST unittest_rdma 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 20:48:40 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:12.565 20:48:40 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:12.565 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.565 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.565 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.565 ************************************ 00:06:12.565 START TEST unittest_nvme_cuse 00:06:12.565 ************************************ 00:06:12.565 20:48:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:12.825 00:06:12.825 00:06:12.825 CUnit - A unit testing framework for 
C - Version 2.1-3 00:06:12.825 http://cunit.sourceforge.net/ 00:06:12.825 00:06:12.825 00:06:12.825 Suite: nvme_cuse 00:06:12.825 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:12.825 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:12.825 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:12.825 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:12.825 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:12.825 Test: test_cuse_nvme_submit_io ...passed 00:06:12.825 Test: test_cuse_nvme_reset ...[2024-06-09 20:48:40.747305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:12.825 passed 00:06:12.825 Test: test_nvme_cuse_stop ...passed 00:06:12.825 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:12.825 00:06:12.825 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.825 suites 1 1 n/a 0 0 00:06:12.825 tests 9 9 9 0 0 00:06:12.825 asserts 121 121 121 0 n/a 00:06:12.825 00:06:12.825 Elapsed time = 0.001 seconds 00:06:12.825 [2024-06-09 20:48:40.747633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:12.825 00:06:12.825 real 0m0.029s 00:06:12.825 user 0m0.019s 00:06:12.825 sys 0m0.011s 00:06:12.825 ************************************ 00:06:12.825 END TEST unittest_nvme_cuse 00:06:12.825 ************************************ 00:06:12.825 20:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.825 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.825 20:48:40 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:12.825 20:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.825 20:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.825 20:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.825 ************************************ 00:06:12.825 START TEST unittest_nvmf 00:06:12.825 ************************************ 00:06:12.825 20:48:40 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:06:12.825 20:48:40 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:12.825 00:06:12.825 00:06:12.825 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.826 http://cunit.sourceforge.net/ 00:06:12.826 00:06:12.826 00:06:12.826 Suite: nvmf 00:06:12.826 Test: test_get_log_page ...passed 00:06:12.826 Test: test_process_fabrics_cmd ...passed 00:06:12.826 Test: test_connect ...[2024-06-09 20:48:40.831438] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:12.826 [2024-06-09 20:48:40.832213] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:12.826 [2024-06-09 20:48:40.832319] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:12.826 [2024-06-09 20:48:40.832372] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:12.826 [2024-06-09 20:48:40.832434] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:06:12.826 [2024-06-09 20:48:40.832529] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 
00:06:12.826 [2024-06-09 20:48:40.832571] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:12.826 [2024-06-09 20:48:40.832680] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:12.826 [2024-06-09 20:48:40.832727] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:12.826 [2024-06-09 20:48:40.832824] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:12.826 [2024-06-09 20:48:40.832909] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:12.826 [2024-06-09 20:48:40.833162] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:12.826 [2024-06-09 20:48:40.833260] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:12.826 [2024-06-09 20:48:40.833355] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:12.826 [2024-06-09 20:48:40.833428] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:12.826 [2024-06-09 20:48:40.833551] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:12.826 passed 00:06:12.826 Test: test_get_ns_id_desc_list ...passed 00:06:12.826 Test: test_identify_ns ...[2024-06-09 20:48:40.833709] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:12.826 [2024-06-09 20:48:40.833953] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:12.826 [2024-06-09 20:48:40.834198] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:12.826 passed 00:06:12.826 Test: test_identify_ns_iocs_specific ...[2024-06-09 20:48:40.834351] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:12.826 [2024-06-09 20:48:40.834493] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:12.826 [2024-06-09 20:48:40.834787] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:12.826 passed 00:06:12.826 Test: test_reservation_write_exclusive ...passed 00:06:12.826 Test: test_reservation_exclusive_access ...passed 00:06:12.826 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:12.826 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:12.826 Test: test_reservation_notification_log_page ...passed 00:06:12.826 Test: test_get_dif_ctx ...passed 00:06:12.826 Test: test_set_get_features ...[2024-06-09 20:48:40.835252] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:12.826 [2024-06-09 20:48:40.835309] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:12.826 passed 00:06:12.826 Test: test_identify_ctrlr ...passed 00:06:12.826 Test: test_identify_ctrlr_iocs_specific ...[2024-06-09 20:48:40.835359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:12.826 [2024-06-09 20:48:40.835414] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:12.826 passed 00:06:12.826 Test: test_custom_admin_cmd ...passed 00:06:12.826 Test: test_fused_compare_and_write ...passed 00:06:12.826 Test: test_multi_async_event_reqs ...passed 00:06:12.826 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:12.826 Test: test_get_ana_log_page_multi_ns_per_anagrp ...[2024-06-09 20:48:40.835859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:12.826 [2024-06-09 20:48:40.835917] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:12.826 [2024-06-09 20:48:40.835966] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:12.826 passed 00:06:12.826 Test: test_multi_async_events ...passed 00:06:12.826 Test: test_rae ...passed 00:06:12.826 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:12.826 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:12.826 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:12.826 Test: test_zcopy_read ...passed 00:06:12.826 Test: test_zcopy_write ...passed 00:06:12.826 Test: test_nvmf_property_set ...passed 00:06:12.826 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-06-09 20:48:40.836405] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:12.826 [2024-06-09 20:48:40.836560] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:12.826 passed 00:06:12.826 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:06:12.826 00:06:12.826 [2024-06-09 20:48:40.836647] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:12.826 [2024-06-09 20:48:40.836709] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:12.826 [2024-06-09 20:48:40.836755] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:12.826 [2024-06-09 20:48:40.836793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:12.826 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.826 suites 1 1 n/a 0 0 00:06:12.826 tests 30 30 30 0 0 00:06:12.826 asserts 885 885 885 0 n/a 00:06:12.826 00:06:12.826 Elapsed time = 0.005 seconds 00:06:12.826 20:48:40 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:12.826 00:06:12.826 00:06:12.826 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.826 
http://cunit.sourceforge.net/ 00:06:12.826 00:06:12.826 00:06:12.826 Suite: nvmf 00:06:12.826 Test: test_get_rw_params ...passed 00:06:12.826 Test: test_lba_in_range ...passed 00:06:12.826 Test: test_get_dif_ctx ...passed 00:06:12.826 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:12.826 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-09 20:48:40.869955] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:12.826 [2024-06-09 20:48:40.870267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:12.826 passed 00:06:12.826 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:06:12.826 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-06-09 20:48:40.870361] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:12.826 [2024-06-09 20:48:40.870424] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:12.826 [2024-06-09 20:48:40.870509] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:12.826 [2024-06-09 20:48:40.870620] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:12.826 [2024-06-09 20:48:40.870668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:12.826 passed 00:06:12.826 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:12.826 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:12.826 00:06:12.826 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.826 suites 1 1 n/a 0 0 00:06:12.826 tests 9 9 9 0 0 00:06:12.826 asserts 157 157 157 0 n/a 00:06:12.826 00:06:12.826 Elapsed time = 0.001 seconds 00:06:12.826 [2024-06-09 20:48:40.870732] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:12.826 [2024-06-09 20:48:40.870770] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:12.826 20:48:40 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:12.826 00:06:12.826 00:06:12.826 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.826 http://cunit.sourceforge.net/ 00:06:12.826 00:06:12.826 00:06:12.826 Suite: nvmf 00:06:12.826 Test: test_discovery_log ...passed 00:06:12.826 Test: test_discovery_log_with_filters ...passed 00:06:12.826 00:06:12.826 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.826 suites 1 1 n/a 0 0 00:06:12.826 tests 2 2 2 0 0 00:06:12.826 asserts 238 238 238 0 n/a 00:06:12.826 00:06:12.826 Elapsed time = 0.003 seconds 00:06:12.826 20:48:40 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:12.826 00:06:12.826 00:06:12.826 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.826 http://cunit.sourceforge.net/ 00:06:12.826 00:06:12.826 00:06:12.826 Suite: nvmf 00:06:12.826 Test: nvmf_test_create_subsystem ...[2024-06-09 20:48:40.946064] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN 
"nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:12.827 [2024-06-09 20:48:40.946449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:12.827 [2024-06-09 20:48:40.946538] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:12.827 [2024-06-09 20:48:40.946582] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:12.827 [2024-06-09 20:48:40.946618] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:12.827 [2024-06-09 20:48:40.946661] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:12.827 [2024-06-09 20:48:40.946787] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:12.827 [2024-06-09 20:48:40.946960] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:12.827 [2024-06-09 20:48:40.947063] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:12.827 passed 00:06:12.827 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-06-09 20:48:40.947108] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:12.827 [2024-06-09 20:48:40.947142] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:12.827 [2024-06-09 20:48:40.947282] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:12.827 [2024-06-09 20:48:40.947379] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:12.827 passed 00:06:12.827 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:12.827 Test: test_reservation_register ...[2024-06-09 20:48:40.947604] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: test_reservation_register_with_ptpl ...[2024-06-09 20:48:40.947715] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:12.827 passed 00:06:12.827 Test: test_reservation_acquire_preempt_1 ...passed 00:06:12.827 Test: test_reservation_acquire_release_with_ptpl ...[2024-06-09 20:48:40.948634] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: test_reservation_release ...passed 00:06:12.827 Test: test_reservation_unregister_notification ...[2024-06-09 20:48:40.950384] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 [2024-06-09 20:48:40.950678] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: test_reservation_release_notification ...passed 00:06:12.827 Test: test_reservation_release_notification_write_exclusive ...[2024-06-09 20:48:40.950958] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 [2024-06-09 20:48:40.951178] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: test_reservation_clear_notification ...passed 00:06:12.827 Test: test_reservation_preempt_notification ...[2024-06-09 20:48:40.951411] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: test_spdk_nvmf_ns_event ...[2024-06-09 20:48:40.951636] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:12.827 passed 00:06:12.827 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:12.827 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:12.827 Test: test_spdk_nvmf_subsystem_add_host ...[2024-06-09 20:48:40.952404] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:12.827 passed 00:06:12.827 Test: test_nvmf_ns_reservation_report ...passed 00:06:12.827 Test: test_nvmf_nqn_is_valid ...[2024-06-09 20:48:40.952514] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:12.827 [2024-06-09 20:48:40.952642] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:12.827 passed 00:06:12.827 Test: test_nvmf_ns_reservation_restore ...[2024-06-09 20:48:40.952718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:12.827 [2024-06-09 20:48:40.952763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:04a6eb06-3e40-4738-a13a-22eb4ff3085": uuid is not the correct length 00:06:12.827 [2024-06-09 20:48:40.952813] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:12.827 [2024-06-09 20:48:40.952919] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:12.827 passed 00:06:12.827 Test: test_nvmf_subsystem_state_change ...passed 00:06:12.827 Test: test_nvmf_reservation_custom_ops ...passed 00:06:12.827 00:06:12.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.827 suites 1 1 n/a 0 0 00:06:12.827 tests 22 22 22 0 0 00:06:12.827 asserts 407 407 407 0 n/a 00:06:12.827 00:06:12.827 Elapsed time = 0.008 seconds 00:06:12.827 20:48:40 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:12.827 00:06:12.827 00:06:12.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.827 http://cunit.sourceforge.net/ 00:06:12.827 00:06:12.827 00:06:12.827 Suite: nvmf 00:06:13.087 Test: test_nvmf_tcp_create ...[2024-06-09 20:48:41.007228] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:13.087 passed 00:06:13.087 Test: test_nvmf_tcp_destroy ...passed 00:06:13.087 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:13.087 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:13.087 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:13.087 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:13.087 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:13.087 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-09 20:48:41.106711] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 passed 00:06:13.087 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:13.087 Test: test_nvmf_tcp_icreq_handle ...[2024-06-09 20:48:41.106796] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 
00:06:13.087 [2024-06-09 20:48:41.106913] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.106971] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.107026] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.107117] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:13.087 [2024-06-09 20:48:41.107225] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.107308] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.107354] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:13.087 [2024-06-09 20:48:41.107397] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.107433] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.107474] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.107514] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.107572] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 passed 00:06:13.087 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:13.087 Test: test_nvmf_tcp_invalid_sgl ...passed 00:06:13.087 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-09 20:48:41.107713] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:13.087 [2024-06-09 20:48:41.107762] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.107804] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e48d0 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.107864] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe950e5630 00:06:13.087 [2024-06-09 20:48:41.107964] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.108018] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.108066] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe950e4d90 00:06:13.087 [2024-06-09 20:48:41.108101] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.108143] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.108183] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:13.087 [2024-06-09 20:48:41.108233] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.108285] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.108331] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:13.087 [2024-06-09 20:48:41.108371] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.108421] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.108473] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.087 [2024-06-09 20:48:41.108517] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.087 [2024-06-09 20:48:41.108585] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.088 [2024-06-09 20:48:41.108630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.088 [2024-06-09 20:48:41.108704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.088 [2024-06-09 20:48:41.108751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.088 [2024-06-09 20:48:41.108792] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.088 [2024-06-09 20:48:41.108828] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.088 passed 00:06:13.088 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-09 20:48:41.108887] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.088 [2024-06-09 20:48:41.108926] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.088 [2024-06-09 
20:48:41.108982] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:13.088 [2024-06-09 20:48:41.109025] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe950e4d90 is same with the state(5) to be set 00:06:13.088 passed 00:06:13.088 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-09 20:48:41.132595] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:13.088 [2024-06-09 20:48:41.132675] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:13.088 passed 00:06:13.088 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:06:13.088 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-06-09 20:48:41.133091] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:13.088 [2024-06-09 20:48:41.133155] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:13.088 passed 00:06:13.088 00:06:13.088 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.088 suites 1 1 n/a 0 0 00:06:13.088 tests 17 17 17 0 0 00:06:13.088 asserts 222 222 222 0 n/a 00:06:13.088 00:06:13.088 Elapsed time = 0.150 seconds 00:06:13.088 [2024-06-09 20:48:41.133409] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:13.088 [2024-06-09 20:48:41.133488] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
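Each *_ut binary exercised above is a standalone CUnit (2.1-3) program: it registers a suite, adds its tests, runs them through the Basic interface, and returns the failure count, which is what run_test checks. The *ERROR* lines interleaved with "passed" are expected output, since the tests deliberately drive error paths and assert on the resulting error handling. A minimal sketch of that harness pattern, assuming an illustrative suite name and test body (nothing below is taken from this run):

    #include <CUnit/Basic.h>

    /* Illustrative test body; real SPDK *_ut files assert on library behavior. */
    static void
    test_example(void)
    {
            CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
            unsigned int num_failures;
            CU_pSuite suite;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            suite = CU_add_suite("example", NULL, NULL);
            CU_add_test(suite, "test_example", test_example);

            /* CU_BRM_VERBOSE prints the per-test "passed" lines and the
             * "Run Summary" table seen throughout this log. */
            CU_basic_set_mode(CU_BRM_VERBOSE);
            CU_basic_run_tests();
            num_failures = CU_get_number_of_failures();
            CU_cleanup_registry();

            return num_failures;
    }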
00:06:13.088 20:48:41 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:13.088 00:06:13.088 00:06:13.088 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.088 http://cunit.sourceforge.net/ 00:06:13.088 00:06:13.088 00:06:13.088 Suite: nvmf 00:06:13.088 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:13.088 00:06:13.088 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.088 suites 1 1 n/a 0 0 00:06:13.088 tests 1 1 1 0 0 00:06:13.088 asserts 17 17 17 0 n/a 00:06:13.088 00:06:13.088 Elapsed time = 0.022 seconds 00:06:13.347 00:06:13.347 real 0m0.474s 00:06:13.347 user 0m0.264s 00:06:13.347 sys 0m0.212s 00:06:13.347 20:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.347 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.347 ************************************ 00:06:13.347 END TEST unittest_nvmf 00:06:13.347 ************************************ 00:06:13.347 20:48:41 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.347 20:48:41 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.347 20:48:41 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:13.347 20:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.347 20:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.347 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.347 ************************************ 00:06:13.347 START TEST unittest_nvmf_rdma 00:06:13.347 ************************************ 00:06:13.347 20:48:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:13.347 00:06:13.347 00:06:13.347 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.347 http://cunit.sourceforge.net/ 00:06:13.347 00:06:13.347 00:06:13.347 Suite: nvmf 00:06:13.347 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-06-09 20:48:41.365199] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:13.347 [2024-06-09 20:48:41.365567] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:13.347 [2024-06-09 20:48:41.365631] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:13.347 passed 00:06:13.347 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:13.347 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:13.347 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:13.347 Test: test_nvmf_rdma_opts_init ...passed 00:06:13.347 Test: test_nvmf_rdma_request_free_data ...passed 00:06:13.347 Test: test_nvmf_rdma_update_ibv_state ...[2024-06-09 20:48:41.367015] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:06:13.347 [2024-06-09 20:48:41.367075] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:13.347 passed 00:06:13.347 Test: test_nvmf_rdma_resources_create ...passed 00:06:13.347 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:13.347 Test: test_nvmf_rdma_resize_cq ...[2024-06-09 20:48:41.368263] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:06:13.347 Using CQ of insufficient size may lead to CQ overrun 00:06:13.347 passed 00:06:13.347 00:06:13.347 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.347 suites 1 1 n/a 0 0 00:06:13.347 tests 10 10 10 0 0 00:06:13.347 asserts 584 584 584 0 n/a 00:06:13.347 00:06:13.347 Elapsed time = 0.003 seconds 00:06:13.347 [2024-06-09 20:48:41.368384] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:13.347 [2024-06-09 20:48:41.368446] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:13.347 00:06:13.347 real 0m0.039s 00:06:13.347 user 0m0.023s 00:06:13.347 sys 0m0.016s 00:06:13.347 ************************************ 00:06:13.347 END TEST unittest_nvmf_rdma 00:06:13.347 ************************************ 00:06:13.348 20:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.348 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.348 20:48:41 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.348 20:48:41 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:13.348 20:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.348 20:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.348 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.348 ************************************ 00:06:13.348 START TEST unittest_scsi 00:06:13.348 ************************************ 00:06:13.348 20:48:41 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:06:13.348 20:48:41 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:13.348 00:06:13.348 00:06:13.348 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.348 http://cunit.sourceforge.net/ 00:06:13.348 00:06:13.348 00:06:13.348 Suite: dev_suite 00:06:13.348 Test: dev_destruct_null_dev ...passed 00:06:13.348 Test: dev_destruct_zero_luns ...passed 00:06:13.348 Test: dev_destruct_null_lun ...passed 00:06:13.348 Test: dev_destruct_success ...passed 00:06:13.348 Test: dev_construct_num_luns_zero ...passed 00:06:13.348 Test: dev_construct_no_lun_zero ...[2024-06-09 20:48:41.457157] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:13.348 [2024-06-09 20:48:41.457504] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:13.348 passed 00:06:13.348 Test: dev_construct_null_lun ...passed 00:06:13.348 Test: dev_construct_name_too_long ...passed 00:06:13.348 Test: dev_construct_success ...passed 00:06:13.348 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:13.348 Test: dev_queue_mgmt_task_success ...passed 00:06:13.348 Test: dev_queue_task_success ...passed 
00:06:13.348 Test: dev_stop_success ...passed 00:06:13.348 Test: dev_add_port_max_ports ...[2024-06-09 20:48:41.457586] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:13.348 [2024-06-09 20:48:41.457636] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:13.348 [2024-06-09 20:48:41.457928] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:13.348 passed 00:06:13.348 Test: dev_add_port_construct_failure1 ...passed 00:06:13.348 Test: dev_add_port_construct_failure2 ...passed 00:06:13.348 Test: dev_add_port_success1 ...passed 00:06:13.348 Test: dev_add_port_success2 ...passed 00:06:13.348 Test: dev_add_port_success3 ...passed 00:06:13.348 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:13.348 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:13.348 Test: dev_find_port_by_id_success ...passed 00:06:13.348 Test: dev_add_lun_bdev_not_found ...passed 00:06:13.348 Test: dev_add_lun_no_free_lun_id ...[2024-06-09 20:48:41.458052] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:13.348 [2024-06-09 20:48:41.458142] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:13.348 [2024-06-09 20:48:41.458509] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:13.348 passed 00:06:13.348 Test: dev_add_lun_success1 ...passed 00:06:13.348 Test: dev_add_lun_success2 ...passed 00:06:13.348 Test: dev_check_pending_tasks ...passed 00:06:13.348 Test: dev_iterate_luns ...passed 00:06:13.348 Test: dev_find_free_lun ...passed 00:06:13.348 00:06:13.348 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.348 suites 1 1 n/a 0 0 00:06:13.348 tests 29 29 29 0 0 00:06:13.348 asserts 97 97 97 0 n/a 00:06:13.348 00:06:13.348 Elapsed time = 0.002 seconds 00:06:13.348 20:48:41 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:13.348 00:06:13.348 00:06:13.348 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.348 http://cunit.sourceforge.net/ 00:06:13.348 00:06:13.348 00:06:13.348 Suite: lun_suite 00:06:13.348 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:13.348 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:06:13.348 Test: lun_task_mgmt_execute_lun_reset ...passed[2024-06-09 20:48:41.493155] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:13.348 [2024-06-09 20:48:41.493491] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:13.348 00:06:13.348 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:13.348 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:13.348 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:13.348 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:13.348 Test: lun_append_task_null_lun_not_supported ...passed 00:06:13.348 Test: 
lun_execute_scsi_task_pending ...passed 00:06:13.348 Test: lun_execute_scsi_task_complete ...passed 00:06:13.348 Test: lun_execute_scsi_task_resize ...passed 00:06:13.348 Test: lun_destruct_success ...passed 00:06:13.348 Test: lun_construct_null_ctx ...passed 00:06:13.348 Test: lun_construct_success ...passed 00:06:13.348 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:13.348 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:13.348 Test: lun_check_pending_tasks_only_for_specific_initiator ...[2024-06-09 20:48:41.493760] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:13.348 [2024-06-09 20:48:41.493970] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:13.348 passed 00:06:13.348 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:13.348 00:06:13.348 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.348 suites 1 1 n/a 0 0 00:06:13.348 tests 18 18 18 0 0 00:06:13.348 asserts 153 153 153 0 n/a 00:06:13.348 00:06:13.348 Elapsed time = 0.001 seconds 00:06:13.348 20:48:41 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:13.607 00:06:13.607 00:06:13.607 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.607 http://cunit.sourceforge.net/ 00:06:13.607 00:06:13.607 00:06:13.607 Suite: scsi_suite 00:06:13.607 Test: scsi_init ...passed 00:06:13.607 00:06:13.607 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.607 suites 1 1 n/a 0 0 00:06:13.607 tests 1 1 1 0 0 00:06:13.607 asserts 1 1 1 0 n/a 00:06:13.607 00:06:13.607 Elapsed time = 0.000 seconds 00:06:13.607 20:48:41 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:13.607 00:06:13.607 00:06:13.607 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.607 http://cunit.sourceforge.net/ 00:06:13.607 00:06:13.607 00:06:13.607 Suite: translation_suite 00:06:13.607 Test: mode_select_6_test ...passed 00:06:13.607 Test: mode_select_6_test2 ...passed 00:06:13.607 Test: mode_sense_6_test ...passed 00:06:13.607 Test: mode_sense_10_test ...passed 00:06:13.607 Test: inquiry_evpd_test ...passed 00:06:13.607 Test: inquiry_standard_test ...passed 00:06:13.607 Test: inquiry_overflow_test ...passed 00:06:13.607 Test: task_complete_test ...passed 00:06:13.607 Test: lba_range_test ...passed 00:06:13.607 Test: xfer_len_test ...[2024-06-09 20:48:41.554846] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:13.607 passed 00:06:13.607 Test: xfer_test ...passed 00:06:13.607 Test: scsi_name_padding_test ...passed 00:06:13.607 Test: get_dif_ctx_test ...passed 00:06:13.607 Test: unmap_split_test ...passed 00:06:13.607 00:06:13.607 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.607 suites 1 1 n/a 0 0 00:06:13.607 tests 14 14 14 0 0 00:06:13.607 asserts 1200 1200 1200 0 n/a 00:06:13.607 00:06:13.607 Elapsed time = 0.004 seconds 00:06:13.607 20:48:41 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:13.607 00:06:13.608 00:06:13.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.608 http://cunit.sourceforge.net/ 00:06:13.608 00:06:13.608 00:06:13.608 Suite: reservation_suite 00:06:13.608 Test: test_reservation_register ...passed 00:06:13.608 Test: test_reservation_reserve ...[2024-06-09 20:48:41.578933] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 [2024-06-09 20:48:41.579263] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 [2024-06-09 20:48:41.579337] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:13.608 [2024-06-09 20:48:41.579433] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:13.608 passed 00:06:13.608 Test: test_reservation_preempt_non_all_regs ...passed 00:06:13.608 Test: test_reservation_preempt_all_regs ...passed 00:06:13.608 Test: test_reservation_cmds_conflict ...[2024-06-09 20:48:41.579498] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 [2024-06-09 20:48:41.579609] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:13.608 [2024-06-09 20:48:41.579733] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 [2024-06-09 20:48:41.579876] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 [2024-06-09 20:48:41.579952] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:13.608 [2024-06-09 20:48:41.579994] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:13.608 [2024-06-09 20:48:41.580032] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:13.608 [2024-06-09 20:48:41.580078] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:13.608 [2024-06-09 20:48:41.580114] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:13.608 passed 00:06:13.608 Test: test_scsi2_reserve_release ...passed 00:06:13.608 Test: test_pr_with_scsi2_reserve_release ...passed 00:06:13.608 00:06:13.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.608 suites 1 1 n/a 0 0 00:06:13.608 tests 7 7 7 0 0 00:06:13.608 asserts 257 257 257 0 n/a 00:06:13.608 00:06:13.608 Elapsed time = 0.001 seconds 00:06:13.608 [2024-06-09 20:48:41.580218] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:13.608 00:06:13.608 real 0m0.152s 00:06:13.608 user 0m0.102s 00:06:13.608 sys 0m0.052s 00:06:13.608 ************************************ 00:06:13.608 END TEST unittest_scsi 00:06:13.608 ************************************ 00:06:13.608 20:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.608 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.608 20:48:41 -- unit/unittest.sh@276 -- # uname -s 00:06:13.608 20:48:41 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:13.608 20:48:41 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 
00:06:13.608 20:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.608 20:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.608 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.608 ************************************ 00:06:13.608 START TEST unittest_sock 00:06:13.608 ************************************ 00:06:13.608 20:48:41 -- common/autotest_common.sh@1104 -- # unittest_sock 00:06:13.608 20:48:41 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:13.608 00:06:13.608 00:06:13.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.608 http://cunit.sourceforge.net/ 00:06:13.608 00:06:13.608 00:06:13.608 Suite: sock 00:06:13.608 Test: posix_sock ...passed 00:06:13.608 Test: ut_sock ...passed 00:06:13.608 Test: posix_sock_group ...passed 00:06:13.608 Test: ut_sock_group ...passed 00:06:13.608 Test: posix_sock_group_fairness ...passed 00:06:13.608 Test: _posix_sock_close ...passed 00:06:13.608 Test: sock_get_default_opts ...passed 00:06:13.608 Test: ut_sock_impl_get_set_opts ...passed 00:06:13.608 Test: posix_sock_impl_get_set_opts ...passed 00:06:13.608 Test: ut_sock_map ...passed 00:06:13.608 Test: override_impl_opts ...passed 00:06:13.608 Test: ut_sock_group_get_ctx ...passed 00:06:13.608 00:06:13.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.608 suites 1 1 n/a 0 0 00:06:13.608 tests 12 12 12 0 0 00:06:13.608 asserts 349 349 349 0 n/a 00:06:13.608 00:06:13.608 Elapsed time = 0.008 seconds 00:06:13.608 20:48:41 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:13.608 00:06:13.608 00:06:13.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.608 http://cunit.sourceforge.net/ 00:06:13.608 00:06:13.608 00:06:13.608 Suite: posix 00:06:13.608 Test: flush ...passed 00:06:13.608 00:06:13.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.608 suites 1 1 n/a 0 0 00:06:13.608 tests 1 1 1 0 0 00:06:13.608 asserts 28 28 28 0 n/a 00:06:13.608 00:06:13.608 Elapsed time = 0.000 seconds 00:06:13.608 20:48:41 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:13.608 00:06:13.608 real 0m0.094s 00:06:13.608 user 0m0.039s 00:06:13.608 sys 0m0.032s 00:06:13.608 ************************************ 00:06:13.608 END TEST unittest_sock 00:06:13.608 ************************************ 00:06:13.608 20:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.608 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.867 20:48:41 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:13.867 20:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.867 20:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.867 20:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.867 ************************************ 00:06:13.867 START TEST unittest_thread 00:06:13.867 ************************************ 00:06:13.867 20:48:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:13.867 00:06:13.867 00:06:13.867 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.867 http://cunit.sourceforge.net/ 00:06:13.867 00:06:13.867 00:06:13.867 Suite: io_channel 00:06:13.867 Test: thread_alloc ...passed 00:06:13.867 Test: thread_send_msg ...passed 00:06:13.867 Test: 
thread_poller ...passed 00:06:13.867 Test: poller_pause ...passed 00:06:13.867 Test: thread_for_each ...passed 00:06:13.867 Test: for_each_channel_remove ...passed 00:06:13.867 Test: for_each_channel_unreg ...[2024-06-09 20:48:41.832929] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffdbcd5e210 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:13.867 passed 00:06:13.867 Test: thread_name ...passed 00:06:13.867 Test: channel ...[2024-06-09 20:48:41.837021] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5633c978c0e0 00:06:13.867 passed 00:06:13.867 Test: channel_destroy_races ...passed 00:06:13.868 Test: thread_exit_test ...[2024-06-09 20:48:41.842190] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:13.868 passed 00:06:13.868 Test: thread_update_stats_test ...passed 00:06:13.868 Test: nested_channel ...passed 00:06:13.868 Test: device_unregister_and_thread_exit_race ...passed 00:06:13.868 Test: cache_closest_timed_poller ...passed 00:06:13.868 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:13.868 Test: io_device_lookup ...passed 00:06:13.868 Test: spdk_spin ...[2024-06-09 20:48:41.853027] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:13.868 [2024-06-09 20:48:41.853090] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdbcd5e200 00:06:13.868 [2024-06-09 20:48:41.853187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:13.868 [2024-06-09 20:48:41.854906] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:13.868 [2024-06-09 20:48:41.854993] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdbcd5e200 00:06:13.868 [2024-06-09 20:48:41.855048] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:13.868 [2024-06-09 20:48:41.855099] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdbcd5e200 00:06:13.868 [2024-06-09 20:48:41.855139] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:13.868 [2024-06-09 20:48:41.855178] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdbcd5e200 00:06:13.868 [2024-06-09 20:48:41.855210] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:13.868 [2024-06-09 20:48:41.855259] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdbcd5e200 00:06:13.868 passed 00:06:13.868 Test: for_each_channel_and_thread_exit_race ...passed 00:06:13.868 Test: for_each_thread_and_thread_exit_race ...passed 00:06:13.868 00:06:13.868 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.868 suites 1 1 n/a 0 0 
00:06:13.868 20:48:41 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:06:13.868 20:48:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:13.868 20:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:13.868 20:48:41 -- common/autotest_common.sh@10 -- # set +x
00:06:13.868 ************************************
00:06:13.868 START TEST unittest_iobuf
00:06:13.868 ************************************
00:06:13.868 20:48:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:06:13.868
00:06:13.868
00:06:13.868 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.868 http://cunit.sourceforge.net/
00:06:13.868
00:06:13.868
00:06:13.868 Suite: io_channel
00:06:13.868 Test: iobuf ...passed
00:06:13.868 Test: iobuf_cache ...[2024-06-09 20:48:41.961263] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:06:13.868 [2024-06-09 20:48:41.961577] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.868 [2024-06-09 20:48:41.961732] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:06:13.868 [2024-06-09 20:48:41.961802] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.868 [2024-06-09 20:48:41.961885] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:06:13.868 [2024-06-09 20:48:41.961948] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:06:13.868 passed
00:06:13.868
00:06:13.868 Run Summary: Type Total Ran Passed Failed Inactive
00:06:13.868 suites 1 1 n/a 0 0
00:06:13.868 tests 2 2 2 0 0
00:06:13.868 asserts 107 107 107 0 n/a
00:06:13.868
00:06:13.868 Elapsed time = 0.006 seconds
00:06:13.868
00:06:13.868 real 0m0.038s
00:06:13.868 user 0m0.019s
00:06:13.868 sys 0m0.020s
00:06:13.868 20:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.868 ************************************
00:06:13.868 END TEST unittest_iobuf
00:06:13.868 ************************************
00:06:13.868 20:48:41 -- common/autotest_common.sh@10 -- # set +x
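The iobuf_cache errors above are likewise deliberately provoked: the test configures a shared pool of only 4 small and 4 large buffers, then asks channels for more cached buffers than the pool holds. A simplified model of that failure mode, assuming a toy pool structure (illustrative only; the real logic lives in lib/thread/iobuf.c, and scripts/calc-iobuf.py helps size the pools):

#include <stdio.h>

struct buf_pool {
        int total;  /* plays the role of spdk_iobuf_opts.*_pool_count */
        int free;
};

/* Each channel tries to prefill its per-thread cache from the shared pool. */
static int channel_cache_init(struct buf_pool *pool, int cache_size, const char *which)
{
        if (pool->free < cache_size) {
                fprintf(stderr, "Failed to populate iobuf %s buffer cache. "
                        "You may need to increase spdk_iobuf_opts.%s_pool_count (%d)\n",
                        which, which, pool->total);
                return -1;
        }
        pool->free -= cache_size;  /* buffers are parked in the channel's cache */
        return 0;
}

int main(void)
{
        struct buf_pool small = { .total = 4, .free = 4 };

        channel_cache_init(&small, 4, "small");  /* succeeds, drains the pool */
        channel_cache_init(&small, 4, "small");  /* fails like the log above */
        return 0;
}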
00:06:13.868 20:48:42 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util
00:06:13.868 20:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:13.868 20:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:13.868 20:48:42 -- common/autotest_common.sh@10 -- # set +x
00:06:13.868 ************************************
00:06:13.868 START TEST unittest_util
00:06:13.868 ************************************
00:06:13.868 20:48:42 -- common/autotest_common.sh@1104 -- # unittest_util
00:06:13.868 20:48:42 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:06:13.868
00:06:13.868
00:06:13.868 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.868 http://cunit.sourceforge.net/
00:06:13.868
00:06:13.868
00:06:13.868 Suite: base64
00:06:13.868 Test: test_base64_get_encoded_strlen ...passed
00:06:13.868 Test: test_base64_get_decoded_len ...passed
00:06:13.868 Test: test_base64_encode ...passed
00:06:13.868 Test: test_base64_decode ...passed
00:06:13.868 Test: test_base64_urlsafe_encode ...passed
00:06:13.868 Test: test_base64_urlsafe_decode ...passed
00:06:13.868
00:06:13.868 Run Summary: Type Total Ran Passed Failed Inactive
00:06:13.868 suites 1 1 n/a 0 0
00:06:13.868 tests 6 6 6 0 0
00:06:13.868 asserts 112 112 112 0 n/a
00:06:13.868
00:06:13.868 Elapsed time = 0.000 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: bit_array
00:06:14.128 Test: test_1bit ...passed
00:06:14.128 Test: test_64bit ...passed
00:06:14.128 Test: test_find ...passed
00:06:14.128 Test: test_resize ...passed
00:06:14.128 Test: test_errors ...passed
00:06:14.128 Test: test_count ...passed
00:06:14.128 Test: test_mask_store_load ...passed
00:06:14.128 Test: test_mask_clear ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 8 8 8 0 0
00:06:14.128 asserts 5075 5075 5075 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.001 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: cpuset
00:06:14.128 Test: test_cpuset ...passed
00:06:14.128 Test: test_cpuset_parse ...[2024-06-09 20:48:42.106373] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '['
00:06:14.128 [2024-06-09 20:48:42.106749] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:06:14.128 [2024-06-09 20:48:42.106858] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:06:14.128 [2024-06-09 20:48:42.106959] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:06:14.128 [2024-06-09 20:48:42.107006] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:06:14.128 passed
00:06:14.128 Test: test_cpuset_fmt ...[2024-06-09 20:48:42.107060] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:06:14.128 [2024-06-09 20:48:42.107128] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:06:14.128 [2024-06-09 20:48:42.107210] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:06:14.128 passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 3 3 3 0 0
00:06:14.128 asserts 65 65 65 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.003 seconds
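test_cpuset_parse feeds deliberately malformed core lists ('[', '[]', '[10--11]', '[10-11,]', '[,10-11]', '[1025]') to the parser and expects every rejection logged above. A compact validator with the same acceptance rules, assuming a hypothetical MAX_CORES bound of 1024 (the "1025 is out of range" message suggests SPDK's limit is in that neighborhood; the real parser is in lib/util/cpuset.c):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_CORES 1024

/* Returns 0 if the "[a,b-c,...]" list is well-formed, -1 otherwise. */
static int parse_core_list(const char *s)
{
        if (*s++ != '[') {
                return -1;
        }
        if (*s == ']') {
                return -1;            /* "[]": empty list */
        }
        while (*s != ']') {
                char *end;
                long first, last;

                if (!isdigit((unsigned char)*s)) {
                        return -1;    /* "[", "[,10-11]" */
                }
                first = strtol(s, &end, 10);
                last = first;
                if (*end == '-') {
                        s = end + 1;
                        if (!isdigit((unsigned char)*s)) {
                                return -1;  /* "[10--11]" */
                        }
                        last = strtol(s, &end, 10);
                }
                if (first > last || last >= MAX_CORES) {
                        return -1;    /* reversed range, "[1025]" */
                }
                s = end;
                if (*s == ',') {
                        s++;
                        if (*s == ']') {
                                return -1;  /* "[10-11,]": trailing comma */
                        }
                } else if (*s != ']') {
                        return -1;
                }
        }
        return 0;
}

int main(void)
{
        /* prints "0 -1 -1": one accepted list, two rejected ones */
        printf("%d %d %d\n", parse_core_list("[1,10-11]"),
               parse_core_list("[10-11,]"), parse_core_list("[1025]"));
        return 0;
}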
00:06:14.128 20:48:42 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: crc16
00:06:14.128 Test: test_crc16_t10dif ...passed
00:06:14.128 Test: test_crc16_t10dif_seed ...passed
00:06:14.128 Test: test_crc16_t10dif_copy ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 3 3 3 0 0
00:06:14.128 asserts 5 5 5 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.000 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: crc32_ieee
00:06:14.128 Test: test_crc32_ieee ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 1 1 1 0 0
00:06:14.128 asserts 1 1 1 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.000 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: crc32c
00:06:14.128 Test: test_crc32c ...passed
00:06:14.128 Test: test_crc32c_nvme ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 2 2 2 0 0
00:06:14.128 asserts 16 16 16 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.001 seconds
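The crc16 suite exercises the T10-DIF CRC used for DIF guard tags: polynomial 0x8BB7, MSB-first, zero seed, no final xor. A bit-at-a-time reference version (SPDK ships a faster table-driven one in lib/util/crc16.c), checked against the standard "123456789" test vector:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* CRC-16/T10-DIF, one bit at a time. The crc argument doubles as the seed,
 * which is what the _seed variant of the test exercises. */
static uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
        const uint8_t *p = buf;

        while (len--) {
                crc ^= (uint16_t)(*p++) << 8;
                for (int bit = 0; bit < 8; bit++) {
                        crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8bb7)
                                             : (uint16_t)(crc << 1);
                }
        }
        return crc;
}

int main(void)
{
        /* standard catalogue check value for this CRC is 0xd0db */
        printf("%04x\n", crc16_t10dif(0, "123456789", 9));
        return 0;
}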
00:06:14.128 20:48:42 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: crc64
00:06:14.128 Test: test_crc64_nvme ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 1 1 1 0 0
00:06:14.128 asserts 4 4 4 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.000 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: string
00:06:14.128 Test: test_parse_ip_addr ...passed
00:06:14.128 Test: test_str_chomp ...passed
00:06:14.128 Test: test_parse_capacity ...passed
00:06:14.128 Test: test_sprintf_append_realloc ...passed
00:06:14.128 Test: test_strtol ...passed
00:06:14.128 Test: test_strtoll ...passed
00:06:14.128 Test: test_strarray ...passed
00:06:14.128 Test: test_strcpy_replace ...passed
00:06:14.128
00:06:14.128 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.128 suites 1 1 n/a 0 0
00:06:14.128 tests 8 8 8 0 0
00:06:14.128 asserts 161 161 161 0 n/a
00:06:14.128
00:06:14.128 Elapsed time = 0.001 seconds
00:06:14.128 20:48:42 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:06:14.128
00:06:14.128
00:06:14.128 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.128 http://cunit.sourceforge.net/
00:06:14.128
00:06:14.128
00:06:14.128 Suite: dif
00:06:14.129 Test: dif_generate_and_verify_test ...[2024-06-09 20:48:42.272763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:14.129 [2024-06-09 20:48:42.273267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:14.129 [2024-06-09 20:48:42.273608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:06:14.129 [2024-06-09 20:48:42.273932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:06:14.129 [2024-06-09 20:48:42.274223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:06:14.129 [2024-06-09 20:48:42.274524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:06:14.129 passed
00:06:14.129 Test: dif_disable_check_test ...[2024-06-09 20:48:42.275551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:06:14.129 [2024-06-09 20:48:42.275906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:06:14.129 [2024-06-09 20:48:42.276201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:06:14.129 passed
00:06:14.129 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-09 20:48:42.277247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de
00:06:14.129 [2024-06-09 20:48:42.277596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8
00:06:14.129 [2024-06-09
20:48:42.277939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:14.129 [2024-06-09 20:48:42.278313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:14.129 [2024-06-09 20:48:42.278641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:14.129 [2024-06-09 20:48:42.278962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:14.129 [2024-06-09 20:48:42.279285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:14.129 [2024-06-09 20:48:42.279591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:14.129 [2024-06-09 20:48:42.279897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:14.129 [2024-06-09 20:48:42.280224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:14.129 [2024-06-09 20:48:42.280548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:14.129 passed 00:06:14.129 Test: dif_apptag_mask_test ...[2024-06-09 20:48:42.280872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:14.129 [2024-06-09 20:48:42.281177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:14.129 passed 00:06:14.129 Test: dif_sec_512_md_0_error_test ...passed 00:06:14.129 Test: dif_sec_4096_md_0_error_test ...passed 00:06:14.129 Test: dif_sec_4100_md_128_error_test ...[2024-06-09 20:48:42.281378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:14.129 [2024-06-09 20:48:42.281432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:14.129 [2024-06-09 20:48:42.281479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
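All of the Guard, App Tag, and Ref Tag comparisons logged in this dif suite are checks against the 8-byte T10 protection-information tuple appended to each block. A simplified verify with the same three comparisons (illustrative only; field endianness and the check-flag handling of the real lib/util/dif.c are omitted, and the example values are chosen just to reproduce one log line's shape):

#include <stdint.h>
#include <stdio.h>

/* The 8-byte protection-information tuple appended to each block. */
struct t10_dif_tuple {
        uint16_t guard;    /* CRC of the block data */
        uint16_t app_tag;  /* application-defined tag */
        uint32_t ref_tag;  /* typically the low 32 bits of the LBA */
};

/* Simplified verify mirroring the three comparisons seen in the log. */
static int dif_verify(const struct t10_dif_tuple *dif, uint16_t exp_guard,
                      uint16_t exp_app, uint32_t exp_ref, uint64_t lba)
{
        if (dif->guard != exp_guard) {
                fprintf(stderr, "Failed to compare Guard: LBA=%llx, Expected=%x, Actual=%x\n",
                        (unsigned long long)lba, exp_guard, dif->guard);
                return -1;
        }
        if (dif->app_tag != exp_app) {
                fprintf(stderr, "Failed to compare App Tag: LBA=%llx, Expected=%x, Actual=%x\n",
                        (unsigned long long)lba, exp_app, dif->app_tag);
                return -1;
        }
        if (dif->ref_tag != exp_ref) {
                fprintf(stderr, "Failed to compare Ref Tag: LBA=%llx, Expected=%x, Actual=%x\n",
                        (unsigned long long)lba, exp_ref, dif->ref_tag);
                return -1;
        }
        return 0;
}

int main(void)
{
        struct t10_dif_tuple pi = { .guard = 0xa2d0, .app_tag = 0x22, .ref_tag = 0x16 };

        /* mismatched ref tag: prints "... Ref Tag: LBA=17, Expected=17, Actual=16" */
        dif_verify(&pi, 0xa2d0, 0x22, 0x17, 23);
        return 0;
}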
00:06:14.129 [2024-06-09 20:48:42.281597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:14.129 passed 00:06:14.129 Test: dif_guard_seed_test ...passed 00:06:14.129 Test: dif_guard_value_test ...[2024-06-09 20:48:42.281644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:14.129 passed 00:06:14.129 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:14.129 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:14.129 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:14.129 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:14.391 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:14.391 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-09 20:48:42.326201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:06:14.391 [2024-06-09 20:48:42.328647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fe01, Actual=fe21 00:06:14.391 [2024-06-09 20:48:42.331107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.333579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.336091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.391 [2024-06-09 20:48:42.338596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.391 [2024-06-09 20:48:42.341056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a2d0 00:06:14.391 [2024-06-09 20:48:42.342418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fe21, Actual=cca4 00:06:14.391 [2024-06-09 20:48:42.343764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753cd, 
Actual=1ab753ed 00:06:14.391 [2024-06-09 20:48:42.346231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=38574640, Actual=38574660 00:06:14.391 [2024-06-09 20:48:42.348692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.351147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.353630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.391 [2024-06-09 20:48:42.356086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.391 [2024-06-09 20:48:42.358551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=1523257f 00:06:14.391 [2024-06-09 20:48:42.359899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=38574660, Actual=9b48028d 00:06:14.391 [2024-06-09 20:48:42.361266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.391 [2024-06-09 20:48:42.363736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.391 [2024-06-09 20:48:42.366207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.368663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.371132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.391 [2024-06-09 20:48:42.373589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.391 [2024-06-09 20:48:42.376057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.391 [2024-06-09 20:48:42.377398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=88010a2d4837a266, Actual=37c524170bdab5a7 00:06:14.391 passed 00:06:14.391 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-09 20:48:42.377858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:06:14.391 [2024-06-09 20:48:42.378174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:06:14.391 [2024-06-09 20:48:42.378471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.378770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 
[2024-06-09 20:48:42.379091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.391 [2024-06-09 20:48:42.379387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.391 [2024-06-09 20:48:42.379678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a2d0 00:06:14.391 [2024-06-09 20:48:42.379934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cca4 00:06:14.391 [2024-06-09 20:48:42.380199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:06:14.391 [2024-06-09 20:48:42.380496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:06:14.391 [2024-06-09 20:48:42.380813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.381119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.381423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.391 [2024-06-09 20:48:42.381742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.391 [2024-06-09 20:48:42.382052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1523257f 00:06:14.391 [2024-06-09 20:48:42.382307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9b48028d 00:06:14.391 [2024-06-09 20:48:42.382587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.391 [2024-06-09 20:48:42.382880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.391 [2024-06-09 20:48:42.383183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.383475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.383770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.391 [2024-06-09 20:48:42.384072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.391 [2024-06-09 20:48:42.384396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.391 [2024-06-09 20:48:42.384664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=37c524170bdab5a7 00:06:14.391 passed 00:06:14.391 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-09 20:48:42.384966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:06:14.391 [2024-06-09 20:48:42.385266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:06:14.391 [2024-06-09 20:48:42.385581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.385901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.391 [2024-06-09 20:48:42.386227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.391 [2024-06-09 20:48:42.386531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.391 [2024-06-09 20:48:42.386836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a2d0 00:06:14.391 [2024-06-09 20:48:42.387104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cca4 00:06:14.391 [2024-06-09 20:48:42.387368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:06:14.391 [2024-06-09 20:48:42.387673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:06:14.392 [2024-06-09 20:48:42.387971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.388271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.388566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.388868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.389164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1523257f 00:06:14.392 [2024-06-09 20:48:42.389423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9b48028d 00:06:14.392 [2024-06-09 20:48:42.389722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.392 [2024-06-09 20:48:42.390018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.392 [2024-06-09 20:48:42.390309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.390617] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.390927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.391227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.391550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.392 [2024-06-09 20:48:42.391810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=37c524170bdab5a7 00:06:14.392 passed 00:06:14.392 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-09 20:48:42.392112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:06:14.392 [2024-06-09 20:48:42.392422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:06:14.392 [2024-06-09 20:48:42.392733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.393041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.393365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.393670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.393988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a2d0 00:06:14.392 [2024-06-09 20:48:42.394253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cca4 00:06:14.392 [2024-06-09 20:48:42.394513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:06:14.392 [2024-06-09 20:48:42.394814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:06:14.392 [2024-06-09 20:48:42.395143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.395449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.395753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.396070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.396374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=1523257f 00:06:14.392 [2024-06-09 20:48:42.396642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9b48028d 00:06:14.392 [2024-06-09 20:48:42.396912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.392 [2024-06-09 20:48:42.397219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.392 [2024-06-09 20:48:42.397530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.397846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.398148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.398453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.398769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.392 [2024-06-09 20:48:42.399042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=37c524170bdab5a7 00:06:14.392 passed 00:06:14.392 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-09 20:48:42.399335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:06:14.392 [2024-06-09 20:48:42.399626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:06:14.392 [2024-06-09 20:48:42.399931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.400238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.400558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.400858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.401156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a2d0 00:06:14.392 [2024-06-09 20:48:42.401420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cca4 00:06:14.392 passed 00:06:14.392 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-09 20:48:42.401741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:06:14.392 [2024-06-09 20:48:42.402046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:06:14.392 [2024-06-09 20:48:42.402376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.402681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.402987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.403295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.403600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1523257f 00:06:14.392 [2024-06-09 20:48:42.403862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9b48028d 00:06:14.392 [2024-06-09 20:48:42.404158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.392 [2024-06-09 20:48:42.404464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.392 [2024-06-09 20:48:42.404768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.405073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.405373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.405692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.392 [2024-06-09 20:48:42.406027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.392 [2024-06-09 20:48:42.406293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=37c524170bdab5a7 00:06:14.392 passed 00:06:14.392 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-09 20:48:42.406591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:06:14.392 [2024-06-09 20:48:42.406896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:06:14.392 [2024-06-09 20:48:42.407191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.407498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.407821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to 
compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.408121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:06:14.392 [2024-06-09 20:48:42.408438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=a2d0 00:06:14.392 [2024-06-09 20:48:42.408689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=cca4 00:06:14.392 passed 00:06:14.392 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-09 20:48:42.408983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:06:14.392 [2024-06-09 20:48:42.409282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:06:14.392 [2024-06-09 20:48:42.409627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.409954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.392 [2024-06-09 20:48:42.410259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.393 [2024-06-09 20:48:42.410565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.393 [2024-06-09 20:48:42.410864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1523257f 00:06:14.393 [2024-06-09 20:48:42.411122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9b48028d 00:06:14.393 [2024-06-09 20:48:42.411429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.393 [2024-06-09 20:48:42.411729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a0d4837a266, Actual=88010a2d4837a266 00:06:14.393 [2024-06-09 20:48:42.412034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.412332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.412632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.393 [2024-06-09 20:48:42.412934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000000058 00:06:14.393 [2024-06-09 20:48:42.413259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.393 [2024-06-09 20:48:42.413558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=37c524170bdab5a7 00:06:14.393 passed 00:06:14.393 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:14.393 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:14.393 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:14.393 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-09 20:48:42.457667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:06:14.393 [2024-06-09 20:48:42.458806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:06:14.393 [2024-06-09 20:48:42.459923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.461018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.462148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.463257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.464366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a2d0 00:06:14.393 [2024-06-09 20:48:42.465472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a045 00:06:14.393 [2024-06-09 20:48:42.466607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753cd, Actual=1ab753ed 00:06:14.393 [2024-06-09 20:48:42.467717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2b3129d2, Actual=2b3129f2 00:06:14.393 [2024-06-09 20:48:42.468830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.469981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.471099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.472213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.473322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=1523257f 00:06:14.393 [2024-06-09 20:48:42.474454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=38a21480 00:06:14.393 [2024-06-09 20:48:42.475567] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.393 [2024-06-09 20:48:42.476694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=36615c7e1eb96cd5, Actual=36615c5e1eb96cd5 00:06:14.393 [2024-06-09 20:48:42.477821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.478943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.480053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.481172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.482291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.393 [2024-06-09 20:48:42.483419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f7ecce510aa922ae 00:06:14.393 passed 00:06:14.393 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-09 20:48:42.483786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:06:14.393 [2024-06-09 20:48:42.484059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:06:14.393 [2024-06-09 20:48:42.484329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.484604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.484885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.485174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.485443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a2d0 00:06:14.393 [2024-06-09 20:48:42.485748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a045 00:06:14.393 [2024-06-09 20:48:42.486035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753cd, Actual=1ab753ed 00:06:14.393 [2024-06-09 20:48:42.486320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2b3129d2, Actual=2b3129f2 00:06:14.393 [2024-06-09 20:48:42.486596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.486869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.487147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.487429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.487696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=1523257f 00:06:14.393 [2024-06-09 20:48:42.487972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=38a21480 00:06:14.393 [2024-06-09 20:48:42.488255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.393 [2024-06-09 20:48:42.488532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=36615c7e1eb96cd5, Actual=36615c5e1eb96cd5 00:06:14.393 [2024-06-09 20:48:42.488811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.489089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.489368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.489655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.393 [2024-06-09 20:48:42.489964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.393 [2024-06-09 20:48:42.490245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f7ecce510aa922ae 00:06:14.393 passed 00:06:14.393 Test: dix_sec_512_md_0_error ...passed 00:06:14.393 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-06-09 20:48:42.490313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
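The dix_* tests repeat these checks for DIX, where the protection tuple lives in a separate metadata buffer rather than being interleaved with the data. The recurring "Metadata size is smaller than DIF size." lines, including the one just above, are the context-init guard deliberately rejecting md_size smaller than the 8-byte tuple. A sketch of that validation, under the assumption that both messages shown in this log come from the same init path (simplified; the real checks are in spdk_dif_ctx_init in lib/util/dif.c):

#include <stdint.h>
#include <stdio.h>

#define DIF_SIZE 8u  /* guard(2) + app tag(2) + ref tag(4) */

static int dif_ctx_init_check(uint32_t block_size, uint32_t md_size, int md_interleaved)
{
        if (md_size < DIF_SIZE) {
                fprintf(stderr, "Metadata size is smaller than DIF size.\n");
                return -1;
        }
        if (block_size == 0) {
                fprintf(stderr, "Zero block size is not allowed\n");
                return -1;
        }
        /* For interleaved DIF the metadata must also fit inside the block. */
        if (md_interleaved && block_size <= md_size) {
                return -1;
        }
        return 0;
}

int main(void)
{
        dif_ctx_init_check(512, 0, 0);      /* md_size 0 < 8: the error above */
        dif_ctx_init_check(4096 + 8, 8, 1); /* a valid 4kB + 8B interleaved layout */
        return 0;
}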
00:06:14.393 passed 00:06:14.393 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:14.393 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:14.393 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:14.393 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:14.393 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:14.393 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:14.393 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:14.393 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:14.393 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-09 20:48:42.533725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:06:14.393 [2024-06-09 20:48:42.534849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:06:14.393 [2024-06-09 20:48:42.535952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.537045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.393 [2024-06-09 20:48:42.538217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.539355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.393 [2024-06-09 20:48:42.540455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a2d0 00:06:14.394 [2024-06-09 20:48:42.541580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a045 00:06:14.394 [2024-06-09 20:48:42.542730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753cd, Actual=1ab753ed 00:06:14.394 [2024-06-09 20:48:42.543835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2b3129d2, Actual=2b3129f2 00:06:14.394 [2024-06-09 20:48:42.544951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.546081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.547192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.394 [2024-06-09 20:48:42.548303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.394 [2024-06-09 20:48:42.549426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=1523257f 00:06:14.394 [2024-06-09 20:48:42.550555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=38a21480 00:06:14.394 [2024-06-09 20:48:42.551676] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.394 [2024-06-09 20:48:42.552777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=36615c7e1eb96cd5, Actual=36615c5e1eb96cd5 00:06:14.394 [2024-06-09 20:48:42.553919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.555025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.556136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.394 [2024-06-09 20:48:42.557232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.394 [2024-06-09 20:48:42.558382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.394 [2024-06-09 20:48:42.559484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f7ecce510aa922ae 00:06:14.394 passed 00:06:14.394 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-09 20:48:42.559874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:06:14.394 [2024-06-09 20:48:42.560144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:06:14.394 [2024-06-09 20:48:42.560423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.560702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.560998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.394 [2024-06-09 20:48:42.561275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:06:14.394 [2024-06-09 20:48:42.561571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=a2d0 00:06:14.394 [2024-06-09 20:48:42.561864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a045 00:06:14.394 [2024-06-09 20:48:42.562148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753cd, Actual=1ab753ed 00:06:14.394 [2024-06-09 20:48:42.562424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2b3129d2, Actual=2b3129f2 00:06:14.394 [2024-06-09 20:48:42.562717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.562999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.394 [2024-06-09 20:48:42.563280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.394 [2024-06-09 20:48:42.563550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.654 [2024-06-09 20:48:42.563825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=1523257f 00:06:14.654 [2024-06-09 20:48:42.564100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=38a21480 00:06:14.654 [2024-06-09 20:48:42.564387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7528ecc20d3, Actual=a576a7728ecc20d3 00:06:14.654 [2024-06-09 20:48:42.564665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=36615c7e1eb96cd5, Actual=36615c5e1eb96cd5 00:06:14.654 [2024-06-09 20:48:42.564936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.654 [2024-06-09 20:48:42.565208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:06:14.654 [2024-06-09 20:48:42.565473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.654 [2024-06-09 20:48:42.565771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000000059 00:06:14.654 [2024-06-09 20:48:42.566055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=2d13a5e95b63d66f 00:06:14.654 [2024-06-09 20:48:42.566330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=f7ecce510aa922ae 00:06:14.654 passed 00:06:14.654 Test: set_md_interleave_iovs_test ...passed 00:06:14.654 Test: set_md_interleave_iovs_split_test ...passed 00:06:14.654 Test: dif_generate_stream_pi_16_test ...passed 00:06:14.654 Test: dif_generate_stream_test ...passed 00:06:14.654 Test: set_md_interleave_iovs_alignment_test ...[2024-06-09 20:48:42.573835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:06:14.654 passed 00:06:14.654 Test: dif_generate_split_test ...passed 00:06:14.654 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:14.654 Test: dif_verify_split_test ...passed 00:06:14.654 Test: dif_verify_stream_multi_segments_test ...passed 00:06:14.654 Test: update_crc32c_pi_16_test ...passed 00:06:14.654 Test: update_crc32c_test ...passed 00:06:14.654 Test: dif_update_crc32c_split_test ...passed 00:06:14.654 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:14.654 Test: get_range_with_md_test ...passed 00:06:14.654 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:14.654 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:14.654 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:14.654 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:14.654 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:14.654 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:14.654 Test: dif_generate_and_verify_unmap_test ...passed 00:06:14.654 00:06:14.654 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.654 suites 1 1 n/a 0 0 00:06:14.654 tests 79 79 79 0 0 00:06:14.654 asserts 3584 3584 3584 0 n/a 00:06:14.654 00:06:14.654 Elapsed time = 0.347 seconds 00:06:14.654 20:48:42 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:14.654 00:06:14.654 00:06:14.654 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.654 http://cunit.sourceforge.net/ 00:06:14.654 00:06:14.654 00:06:14.654 Suite: iov 00:06:14.654 Test: test_single_iov ...passed 00:06:14.654 Test: test_simple_iov ...passed 00:06:14.654 Test: test_complex_iov ...passed 00:06:14.654 Test: test_iovs_to_buf ...passed 00:06:14.654 Test: test_buf_to_iovs ...passed 00:06:14.654 Test: test_memset ...passed 00:06:14.654 Test: test_iov_one ...passed 00:06:14.654 Test: test_iov_xfer ...passed 00:06:14.654 00:06:14.654 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.654 suites 1 1 n/a 0 0 00:06:14.654 tests 8 8 8 0 0 00:06:14.654 asserts 156 156 156 0 n/a 00:06:14.654 00:06:14.654 Elapsed time = 0.000 seconds 00:06:14.654 20:48:42 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:14.654 00:06:14.654 00:06:14.654 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.654 http://cunit.sourceforge.net/ 00:06:14.654 00:06:14.654 00:06:14.654 Suite: math 00:06:14.654 Test: test_serial_number_arithmetic ...passed 00:06:14.654 Suite: erase 00:06:14.654 Test: test_memset_s ...passed 00:06:14.654 00:06:14.654 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.654 suites 2 2 n/a 0 0 00:06:14.654 tests 2 2 2 0 0 00:06:14.654 asserts 18 18 18 0 n/a 00:06:14.654 00:06:14.654 Elapsed time = 0.000 seconds 00:06:14.654 20:48:42 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:14.654 00:06:14.654 00:06:14.654 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.654 http://cunit.sourceforge.net/ 00:06:14.654 00:06:14.654 00:06:14.654 Suite: pipe 00:06:14.654 Test: test_create_destroy ...passed 00:06:14.654 Test: test_write_get_buffer ...passed 00:06:14.654 Test: test_write_advance ...passed 00:06:14.654 Test: test_read_get_buffer ...passed 00:06:14.654 Test: test_read_advance ...passed 00:06:14.654 Test: test_data ...passed 00:06:14.654 00:06:14.654 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:14.654 suites 1 1 n/a 0 0 00:06:14.654 tests 6 6 6 0 0 00:06:14.654 asserts 250 250 250 0 n/a 00:06:14.654 00:06:14.654 Elapsed time = 0.000 seconds 00:06:14.654 20:48:42 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:14.654 00:06:14.654 00:06:14.654 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.654 http://cunit.sourceforge.net/ 00:06:14.654 00:06:14.654 00:06:14.654 Suite: xor 00:06:14.654 Test: test_xor_gen ...passed 00:06:14.654 00:06:14.654 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.654 suites 1 1 n/a 0 0 00:06:14.654 tests 1 1 1 0 0 00:06:14.654 asserts 17 17 17 0 n/a 00:06:14.654 00:06:14.654 Elapsed time = 0.007 seconds 00:06:14.654 00:06:14.654 real 0m0.731s 00:06:14.654 user 0m0.564s 00:06:14.654 sys 0m0.172s 00:06:14.654 20:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.654 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.654 ************************************ 00:06:14.654 END TEST unittest_util 00:06:14.654 ************************************ 00:06:14.654 20:48:42 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:14.654 20:48:42 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:14.654 20:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.654 20:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.654 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.654 ************************************ 00:06:14.654 START TEST unittest_vhost 00:06:14.654 ************************************ 00:06:14.654 20:48:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:14.913 00:06:14.913 00:06:14.913 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.913 http://cunit.sourceforge.net/ 00:06:14.913 00:06:14.913 00:06:14.913 Suite: vhost_suite 00:06:14.913 Test: desc_to_iov_test ...[2024-06-09 20:48:42.831532] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:14.913 passed 00:06:14.913 Test: create_controller_test ...[2024-06-09 20:48:42.835991] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:14.913 [2024-06-09 20:48:42.836134] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:14.913 [2024-06-09 20:48:42.836270] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:14.913 [2024-06-09 20:48:42.836388] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:14.914 [2024-06-09 20:48:42.836440] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:14.914 [2024-06-09 20:48:42.836547] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-06-09 20:48:42.837569] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:14.914 passed 00:06:14.914 Test: session_find_by_vid_test ...passed 00:06:14.914 Test: remove_controller_test ...[2024-06-09 20:48:42.839604] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:14.914 passed 00:06:14.914 Test: vq_avail_ring_get_test ...passed 00:06:14.914 Test: vq_packed_ring_test ...passed 00:06:14.914 Test: vhost_blk_construct_test ...passed 00:06:14.914 00:06:14.914 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.914 suites 1 1 n/a 0 0 00:06:14.914 tests 7 7 7 0 0 00:06:14.914 asserts 145 145 145 0 n/a 00:06:14.914 00:06:14.914 Elapsed time = 0.012 seconds 00:06:14.914 00:06:14.914 real 0m0.051s 00:06:14.914 user 0m0.024s 00:06:14.914 sys 0m0.027s 00:06:14.914 20:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.914 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 ************************************ 00:06:14.914 END TEST unittest_vhost 00:06:14.914 ************************************ 00:06:14.914 20:48:42 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:14.914 20:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.914 20:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.914 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 ************************************ 00:06:14.914 START TEST unittest_dma 00:06:14.914 ************************************ 00:06:14.914 20:48:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:14.914 00:06:14.914 00:06:14.914 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.914 http://cunit.sourceforge.net/ 00:06:14.914 00:06:14.914 00:06:14.914 Suite: dma_suite 00:06:14.914 Test: test_dma ...[2024-06-09 20:48:42.927067] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:14.914 passed 00:06:14.914 00:06:14.914 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.914 suites 1 1 n/a 0 0 00:06:14.914 tests 1 1 1 0 0 00:06:14.914 asserts 50 50 50 0 n/a 00:06:14.914 00:06:14.914 Elapsed time = 0.001 seconds 00:06:14.914 00:06:14.914 real 0m0.028s 00:06:14.914 user 0m0.016s 00:06:14.914 sys 0m0.012s 00:06:14.914 20:48:42 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.914 ************************************ 00:06:14.914 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 END TEST unittest_dma 00:06:14.914 ************************************ 00:06:14.914 20:48:42 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:14.914 20:48:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.914 20:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.914 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 ************************************ 00:06:14.914 START TEST unittest_init 00:06:14.914 ************************************ 00:06:14.914 20:48:42 -- common/autotest_common.sh@1104 -- # unittest_init 00:06:14.914 20:48:42 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:14.914 00:06:14.914 00:06:14.914 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.914 http://cunit.sourceforge.net/ 00:06:14.914 00:06:14.914 00:06:14.914 Suite: subsystem_suite 00:06:14.914 Test: subsystem_sort_test_depends_on_single ...passed 00:06:14.914 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:14.914 Test: subsystem_sort_test_missing_dependency ...[2024-06-09 20:48:43.012918] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:14.914 [2024-06-09 20:48:43.013430] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:14.914 passed 00:06:14.914 00:06:14.914 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.914 suites 1 1 n/a 0 0 00:06:14.914 tests 3 3 3 0 0 00:06:14.914 asserts 20 20 20 0 n/a 00:06:14.914 00:06:14.914 Elapsed time = 0.001 seconds 00:06:14.914 00:06:14.914 real 0m0.038s 00:06:14.914 user 0m0.031s 00:06:14.914 sys 0m0.004s 00:06:14.914 20:48:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.914 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 ************************************ 00:06:14.914 END TEST unittest_init 00:06:14.914 ************************************ 00:06:14.914 20:48:43 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:14.914 20:48:43 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:14.914 20:48:43 -- unit/unittest.sh@290 -- # hostname 00:06:14.914 20:48:43 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:15.172 geninfo: WARNING: invalid characters removed from testname! 
00:06:41.727 20:49:06 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:42.663 20:49:10 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:45.194 20:49:13 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:48.473 20:49:16 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:51.004 20:49:18 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:53.535 20:49:21 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:56.099 20:49:24 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:58.631 20:49:26 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:58.631 20:49:26 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:59.198 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:59.198 Found 308 entries. 
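
The coverage commands above boil down to a short pipeline: merge the pre-test baseline with the post-test capture, fold the result into a working tracefile, strip non-library sources (app/, dpdk/, examples/, lib/vhost/rte_vhost/, test/), and render HTML. The geninfo warning a few lines up most likely just flags the dashes in the hostname-derived -t test name, which lcov sanitizes down to word characters. A condensed sketch of the same flow, assuming spdk_repo and out as hypothetical shorthands for the long absolute paths in the log:

    # Condensed form of the lcov/genhtml pipeline run above; spdk_repo and out
    # are hypothetical placeholders for the absolute paths in the log.
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    lcov $LCOV_OPTS -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info   # baseline + test counters
    lcov $LCOV_OPTS -a ut_cov_total.info -o ut_cov_unit.info                       # working tracefile
    for p in app dpdk examples lib/vhost/rte_vhost test; do
        lcov $LCOV_OPTS -r ut_cov_unit.info "$spdk_repo/$p/*" -o ut_cov_unit.info  # drop non-library sources
    done
    genhtml ut_cov_unit.info --output-directory "$out/ut_coverage"                 # render the HTML report
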
00:06:59.198 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:06:59.198 Writing .css and .png files. 00:06:59.198 Generating output. 00:06:59.457 Processing file include/linux/virtio_ring.h 00:06:59.715 Processing file include/spdk/histogram_data.h 00:06:59.715 Processing file include/spdk/nvme.h 00:06:59.715 Processing file include/spdk/base64.h 00:06:59.715 Processing file include/spdk/bdev_module.h 00:06:59.715 Processing file include/spdk/thread.h 00:06:59.715 Processing file include/spdk/endian.h 00:06:59.715 Processing file include/spdk/trace.h 00:06:59.715 Processing file include/spdk/mmio.h 00:06:59.715 Processing file include/spdk/nvme_spec.h 00:06:59.715 Processing file include/spdk/util.h 00:06:59.715 Processing file include/spdk/nvmf_transport.h 00:06:59.715 Processing file include/spdk_internal/rdma.h 00:06:59.715 Processing file include/spdk_internal/sock.h 00:06:59.715 Processing file include/spdk_internal/utf.h 00:06:59.715 Processing file include/spdk_internal/nvme_tcp.h 00:06:59.715 Processing file include/spdk_internal/sgl.h 00:06:59.715 Processing file include/spdk_internal/virtio.h 00:06:59.974 Processing file lib/accel/accel_rpc.c 00:06:59.974 Processing file lib/accel/accel_sw.c 00:06:59.974 Processing file lib/accel/accel.c 00:07:00.231 Processing file lib/bdev/part.c 00:07:00.231 Processing file lib/bdev/bdev_rpc.c 00:07:00.231 Processing file lib/bdev/scsi_nvme.c 00:07:00.231 Processing file lib/bdev/bdev_zone.c 00:07:00.231 Processing file lib/bdev/bdev.c 00:07:00.489 Processing file lib/blob/zeroes.c 00:07:00.489 Processing file lib/blob/blobstore.c 00:07:00.489 Processing file lib/blob/request.c 00:07:00.489 Processing file lib/blob/blobstore.h 00:07:00.489 Processing file lib/blob/blob_bs_dev.c 00:07:00.489 Processing file lib/blobfs/tree.c 00:07:00.489 Processing file lib/blobfs/blobfs.c 00:07:00.747 Processing file lib/conf/conf.c 00:07:00.747 Processing file lib/dma/dma.c 00:07:01.005 Processing file lib/env_dpdk/env.c 00:07:01.005 Processing file lib/env_dpdk/pci.c 00:07:01.005 Processing file lib/env_dpdk/memory.c 00:07:01.005 Processing file lib/env_dpdk/pci_idxd.c 00:07:01.005 Processing file lib/env_dpdk/pci_vmd.c 00:07:01.005 Processing file lib/env_dpdk/threads.c 00:07:01.005 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:01.005 Processing file lib/env_dpdk/sigbus_handler.c 00:07:01.005 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:01.005 Processing file lib/env_dpdk/pci_dpdk.c 00:07:01.005 Processing file lib/env_dpdk/pci_ioat.c 00:07:01.005 Processing file lib/env_dpdk/init.c 00:07:01.005 Processing file lib/env_dpdk/pci_event.c 00:07:01.005 Processing file lib/env_dpdk/pci_virtio.c 00:07:01.005 Processing file lib/event/log_rpc.c 00:07:01.005 Processing file lib/event/app.c 00:07:01.005 Processing file lib/event/app_rpc.c 00:07:01.005 Processing file lib/event/scheduler_static.c 00:07:01.005 Processing file lib/event/reactor.c 00:07:01.574 Processing file lib/ftl/ftl_writer.h 00:07:01.574 Processing file lib/ftl/ftl_l2p_flat.c 00:07:01.574 Processing file lib/ftl/ftl_io.c 00:07:01.574 Processing file lib/ftl/ftl_writer.c 00:07:01.574 Processing file lib/ftl/ftl_init.c 00:07:01.574 Processing file lib/ftl/ftl_l2p_cache.c 00:07:01.574 Processing file lib/ftl/ftl_io.h 00:07:01.574 Processing file lib/ftl/ftl_core.c 00:07:01.574 Processing file lib/ftl/ftl_band.h 00:07:01.574 Processing file lib/ftl/ftl_trace.c 00:07:01.574 Processing file lib/ftl/ftl_l2p.c 00:07:01.574 Processing file lib/ftl/ftl_band.c 00:07:01.574 Processing 
file lib/ftl/ftl_nv_cache_io.h 00:07:01.574 Processing file lib/ftl/ftl_nv_cache.h 00:07:01.574 Processing file lib/ftl/ftl_nv_cache.c 00:07:01.574 Processing file lib/ftl/ftl_layout.c 00:07:01.574 Processing file lib/ftl/ftl_p2l.c 00:07:01.574 Processing file lib/ftl/ftl_sb.c 00:07:01.574 Processing file lib/ftl/ftl_band_ops.c 00:07:01.574 Processing file lib/ftl/ftl_debug.h 00:07:01.574 Processing file lib/ftl/ftl_debug.c 00:07:01.574 Processing file lib/ftl/ftl_rq.c 00:07:01.574 Processing file lib/ftl/ftl_reloc.c 00:07:01.574 Processing file lib/ftl/ftl_core.h 00:07:01.574 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:01.574 Processing file lib/ftl/base/ftl_base_dev.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:01.832 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:01.832 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:01.832 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:02.090 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:02.090 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:02.090 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:02.090 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:02.090 Processing file lib/ftl/utils/ftl_mempool.c 00:07:02.090 Processing file lib/ftl/utils/ftl_df.h 00:07:02.090 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:02.090 Processing file lib/ftl/utils/ftl_md.c 00:07:02.090 Processing file lib/ftl/utils/ftl_property.h 00:07:02.090 Processing file lib/ftl/utils/ftl_property.c 00:07:02.090 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:02.090 Processing file lib/ftl/utils/ftl_conf.c 00:07:02.090 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:02.348 Processing file lib/idxd/idxd.c 00:07:02.348 Processing file lib/idxd/idxd_user.c 00:07:02.348 Processing file lib/idxd/idxd_internal.h 00:07:02.348 Processing file lib/init/subsystem_rpc.c 00:07:02.348 Processing file lib/init/json_config.c 00:07:02.348 Processing file lib/init/subsystem.c 00:07:02.349 Processing file lib/init/rpc.c 00:07:02.349 Processing file lib/ioat/ioat_internal.h 00:07:02.349 Processing file lib/ioat/ioat.c 00:07:02.916 Processing file lib/iscsi/task.h 00:07:02.916 Processing file lib/iscsi/init_grp.c 00:07:02.916 Processing file lib/iscsi/iscsi_rpc.c 00:07:02.916 Processing file lib/iscsi/iscsi.h 00:07:02.916 Processing file lib/iscsi/iscsi.c 00:07:02.916 Processing file lib/iscsi/conn.c 00:07:02.916 Processing file lib/iscsi/param.c 00:07:02.916 Processing file lib/iscsi/md5.c 00:07:02.916 Processing file lib/iscsi/portal_grp.c 00:07:02.916 Processing file lib/iscsi/tgt_node.c 00:07:02.916 Processing file lib/iscsi/iscsi_subsystem.c 00:07:02.916 Processing file lib/iscsi/task.c 00:07:02.916 Processing file lib/json/json_write.c 00:07:02.916 Processing file lib/json/json_util.c 00:07:02.916 Processing file lib/json/json_parse.c 00:07:02.916 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:07:02.916 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:02.916 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:02.916 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:03.175 Processing file lib/log/log_flags.c 00:07:03.175 Processing file lib/log/log_deprecated.c 00:07:03.175 Processing file lib/log/log.c 00:07:03.175 Processing file lib/lvol/lvol.c 00:07:03.433 Processing file lib/nbd/nbd.c 00:07:03.433 Processing file lib/nbd/nbd_rpc.c 00:07:03.433 Processing file lib/notify/notify.c 00:07:03.433 Processing file lib/notify/notify_rpc.c 00:07:04.001 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:04.001 Processing file lib/nvme/nvme_internal.h 00:07:04.001 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:04.001 Processing file lib/nvme/nvme_fabric.c 00:07:04.001 Processing file lib/nvme/nvme_ctrlr.c 00:07:04.001 Processing file lib/nvme/nvme_opal.c 00:07:04.001 Processing file lib/nvme/nvme_quirks.c 00:07:04.001 Processing file lib/nvme/nvme_rdma.c 00:07:04.001 Processing file lib/nvme/nvme.c 00:07:04.001 Processing file lib/nvme/nvme_discovery.c 00:07:04.001 Processing file lib/nvme/nvme_tcp.c 00:07:04.001 Processing file lib/nvme/nvme_pcie_internal.h 00:07:04.001 Processing file lib/nvme/nvme_ns.c 00:07:04.001 Processing file lib/nvme/nvme_pcie_common.c 00:07:04.001 Processing file lib/nvme/nvme_io_msg.c 00:07:04.001 Processing file lib/nvme/nvme_pcie.c 00:07:04.001 Processing file lib/nvme/nvme_poll_group.c 00:07:04.001 Processing file lib/nvme/nvme_qpair.c 00:07:04.001 Processing file lib/nvme/nvme_ns_cmd.c 00:07:04.001 Processing file lib/nvme/nvme_transport.c 00:07:04.001 Processing file lib/nvme/nvme_vfio_user.c 00:07:04.001 Processing file lib/nvme/nvme_zns.c 00:07:04.001 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:04.001 Processing file lib/nvme/nvme_cuse.c 00:07:04.569 Processing file lib/nvmf/nvmf.c 00:07:04.569 Processing file lib/nvmf/transport.c 00:07:04.569 Processing file lib/nvmf/subsystem.c 00:07:04.569 Processing file lib/nvmf/ctrlr.c 00:07:04.569 Processing file lib/nvmf/ctrlr_discovery.c 00:07:04.569 Processing file lib/nvmf/ctrlr_bdev.c 00:07:04.569 Processing file lib/nvmf/rdma.c 00:07:04.569 Processing file lib/nvmf/nvmf_internal.h 00:07:04.569 Processing file lib/nvmf/nvmf_rpc.c 00:07:04.569 Processing file lib/nvmf/tcp.c 00:07:04.828 Processing file lib/rdma/common.c 00:07:04.828 Processing file lib/rdma/rdma_verbs.c 00:07:04.828 Processing file lib/rpc/rpc.c 00:07:05.087 Processing file lib/scsi/task.c 00:07:05.087 Processing file lib/scsi/port.c 00:07:05.087 Processing file lib/scsi/lun.c 00:07:05.087 Processing file lib/scsi/scsi_bdev.c 00:07:05.087 Processing file lib/scsi/scsi.c 00:07:05.087 Processing file lib/scsi/dev.c 00:07:05.087 Processing file lib/scsi/scsi_pr.c 00:07:05.087 Processing file lib/scsi/scsi_rpc.c 00:07:05.087 Processing file lib/sock/sock.c 00:07:05.087 Processing file lib/sock/sock_rpc.c 00:07:05.087 Processing file lib/thread/iobuf.c 00:07:05.087 Processing file lib/thread/thread.c 00:07:05.347 Processing file lib/trace/trace.c 00:07:05.347 Processing file lib/trace/trace_rpc.c 00:07:05.347 Processing file lib/trace/trace_flags.c 00:07:05.347 Processing file lib/trace_parser/trace.cpp 00:07:05.347 Processing file lib/ut/ut.c 00:07:05.347 Processing file lib/ut_mock/mock.c 00:07:05.916 Processing file lib/util/math.c 00:07:05.916 Processing file lib/util/uuid.c 00:07:05.916 Processing file lib/util/crc32.c 00:07:05.916 Processing file lib/util/zipf.c 00:07:05.916 Processing 
file lib/util/crc64.c 00:07:05.916 Processing file lib/util/xor.c 00:07:05.916 Processing file lib/util/fd_group.c 00:07:05.916 Processing file lib/util/hexlify.c 00:07:05.916 Processing file lib/util/strerror_tls.c 00:07:05.916 Processing file lib/util/dif.c 00:07:05.916 Processing file lib/util/bit_array.c 00:07:05.916 Processing file lib/util/fd.c 00:07:05.916 Processing file lib/util/file.c 00:07:05.916 Processing file lib/util/cpuset.c 00:07:05.916 Processing file lib/util/crc16.c 00:07:05.916 Processing file lib/util/iov.c 00:07:05.916 Processing file lib/util/pipe.c 00:07:05.916 Processing file lib/util/crc32_ieee.c 00:07:05.916 Processing file lib/util/base64.c 00:07:05.916 Processing file lib/util/crc32c.c 00:07:05.916 Processing file lib/util/string.c 00:07:05.916 Processing file lib/vfio_user/host/vfio_user.c 00:07:05.916 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:06.175 Processing file lib/vhost/rte_vhost_user.c 00:07:06.175 Processing file lib/vhost/vhost_rpc.c 00:07:06.175 Processing file lib/vhost/vhost.c 00:07:06.175 Processing file lib/vhost/vhost_internal.h 00:07:06.175 Processing file lib/vhost/vhost_blk.c 00:07:06.175 Processing file lib/vhost/vhost_scsi.c 00:07:06.175 Processing file lib/virtio/virtio_vfio_user.c 00:07:06.175 Processing file lib/virtio/virtio_vhost_user.c 00:07:06.175 Processing file lib/virtio/virtio.c 00:07:06.175 Processing file lib/virtio/virtio_pci.c 00:07:06.435 Processing file lib/vmd/vmd.c 00:07:06.435 Processing file lib/vmd/led.c 00:07:06.435 Processing file module/accel/dsa/accel_dsa.c 00:07:06.435 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:06.435 Processing file module/accel/error/accel_error_rpc.c 00:07:06.435 Processing file module/accel/error/accel_error.c 00:07:06.695 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:06.695 Processing file module/accel/iaa/accel_iaa.c 00:07:06.695 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:06.695 Processing file module/accel/ioat/accel_ioat.c 00:07:06.695 Processing file module/bdev/aio/bdev_aio.c 00:07:06.695 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:06.695 Processing file module/bdev/delay/vbdev_delay.c 00:07:06.695 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:06.954 Processing file module/bdev/error/vbdev_error.c 00:07:06.954 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:06.954 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:06.954 Processing file module/bdev/ftl/bdev_ftl.c 00:07:06.954 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:06.954 Processing file module/bdev/gpt/gpt.h 00:07:06.954 Processing file module/bdev/gpt/gpt.c 00:07:07.213 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:07.213 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:07.213 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:07.213 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:07.213 Processing file module/bdev/malloc/bdev_malloc.c 00:07:07.213 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:07.213 Processing file module/bdev/null/bdev_null.c 00:07:07.213 Processing file module/bdev/null/bdev_null_rpc.c 00:07:07.472 Processing file module/bdev/nvme/nvme_rpc.c 00:07:07.472 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:07.472 Processing file module/bdev/nvme/vbdev_opal.c 00:07:07.472 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:07.472 Processing file module/bdev/nvme/bdev_nvme.c 00:07:07.472 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:07.473 Processing file 
module/bdev/nvme/bdev_nvme_rpc.c 00:07:07.731 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:07.732 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:07.732 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:07.732 Processing file module/bdev/raid/raid1.c 00:07:07.732 Processing file module/bdev/raid/bdev_raid.c 00:07:07.732 Processing file module/bdev/raid/bdev_raid.h 00:07:07.732 Processing file module/bdev/raid/raid0.c 00:07:07.732 Processing file module/bdev/raid/concat.c 00:07:07.732 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:07.991 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:07.991 Processing file module/bdev/split/vbdev_split.c 00:07:07.991 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:07.991 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:07.991 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:07.991 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:07.991 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:08.250 Processing file module/blob/bdev/blob_bdev.c 00:07:08.250 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:08.250 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:08.250 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:08.509 Processing file module/event/subsystems/accel/accel.c 00:07:08.509 Processing file module/event/subsystems/bdev/bdev.c 00:07:08.509 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:08.509 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:08.509 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:08.768 Processing file module/event/subsystems/nbd/nbd.c 00:07:08.768 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:08.768 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:08.768 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:08.768 Processing file module/event/subsystems/scsi/scsi.c 00:07:09.027 Processing file module/event/subsystems/sock/sock.c 00:07:09.027 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:09.027 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:09.027 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:09.027 Processing file module/event/subsystems/vmd/vmd.c 00:07:09.285 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:09.285 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:09.285 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:09.285 Processing file module/sock/sock_kernel.h 00:07:09.544 Processing file module/sock/posix/posix.c 00:07:09.544 Writing directory view page. 
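
With the directory view written, the report is complete; genhtml's standard entry point is index.html under the output directory handed to it above:

    # The top-level page lands in the --output-directory from the genhtml call above:
    ls /home/vagrant/spdk_repo/spdk/../output/ut_coverage/index.html
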
00:07:09.544 Overall coverage rate: 00:07:09.544 lines......: 38.9% (38783 of 99805 lines) 00:07:09.544 functions..: 42.5% (3546 of 8335 functions) 00:07:09.544 00:07:09.544 00:07:09.544 ===================== 00:07:09.544 All unit tests passed 00:07:09.544 ===================== 00:07:09.544 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:09.544 20:49:37 -- unit/unittest.sh@302 -- # set +x 00:07:09.544 00:07:09.544 00:07:09.544 ************************************ 00:07:09.544 END TEST unittest 00:07:09.544 ************************************ 00:07:09.544 00:07:09.544 real 1m59.787s 00:07:09.544 user 1m38.476s 00:07:09.544 sys 0m11.886s 00:07:09.544 20:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.544 20:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.544 20:49:37 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:09.544 20:49:37 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:09.544 20:49:37 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:09.544 20:49:37 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:09.544 20:49:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:09.544 20:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.544 20:49:37 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:09.544 20:49:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.544 20:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.544 20:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.544 ************************************ 00:07:09.544 START TEST env 00:07:09.544 ************************************ 00:07:09.544 20:49:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:09.544 * Looking for test storage... 
00:07:09.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:09.544 20:49:37 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:09.544 20:49:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.544 20:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.545 20:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.804 ************************************ 00:07:09.804 START TEST env_memory 00:07:09.804 ************************************ 00:07:09.804 20:49:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:09.804 00:07:09.804 00:07:09.804 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.804 http://cunit.sourceforge.net/ 00:07:09.804 00:07:09.804 00:07:09.804 Suite: memory 00:07:09.804 Test: alloc and free memory map ...[2024-06-09 20:49:37.785361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:09.804 passed 00:07:09.804 Test: mem map translation ...[2024-06-09 20:49:37.836440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:09.804 [2024-06-09 20:49:37.836862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:09.804 [2024-06-09 20:49:37.837153] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:09.804 [2024-06-09 20:49:37.837383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:09.804 passed 00:07:09.804 Test: mem map registration ...[2024-06-09 20:49:37.926012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:09.804 [2024-06-09 20:49:37.926381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:09.804 passed 00:07:10.063 Test: mem map adjacent registrations ...passed 00:07:10.063 00:07:10.063 Run Summary: Type Total Ran Passed Failed Inactive 00:07:10.063 suites 1 1 n/a 0 0 00:07:10.063 tests 4 4 4 0 0 00:07:10.063 asserts 152 152 152 0 n/a 00:07:10.063 00:07:10.063 Elapsed time = 0.289 seconds 00:07:10.063 00:07:10.063 real 0m0.318s 00:07:10.063 user 0m0.304s 00:07:10.063 sys 0m0.012s 00:07:10.063 20:49:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.063 ************************************ 00:07:10.063 20:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.063 END TEST env_memory 00:07:10.063 ************************************ 00:07:10.063 20:49:38 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.063 20:49:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.063 20:49:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.063 20:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.063 ************************************ 00:07:10.063 START TEST env_vtophys 00:07:10.063 ************************************ 00:07:10.063 20:49:38 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:10.063 EAL: lib.eal log level changed from notice to debug 00:07:10.063 EAL: Detected lcore 0 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 1 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 2 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 3 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 4 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 5 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 6 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 7 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 8 as core 0 on socket 0 00:07:10.063 EAL: Detected lcore 9 as core 0 on socket 0 00:07:10.063 EAL: Maximum logical cores by configuration: 128 00:07:10.063 EAL: Detected CPU lcores: 10 00:07:10.063 EAL: Detected NUMA nodes: 1 00:07:10.063 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:10.063 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:10.063 EAL: Checking presence of .so 'librte_eal.so' 00:07:10.063 EAL: Detected static linkage of DPDK 00:07:10.063 EAL: No shared files mode enabled, IPC will be disabled 00:07:10.063 EAL: Selected IOVA mode 'PA' 00:07:10.063 EAL: Probing VFIO support... 00:07:10.063 EAL: IOMMU type 1 (Type 1) is supported 00:07:10.063 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:10.063 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:10.063 EAL: VFIO support initialized 00:07:10.063 EAL: Ask a virtual area of 0x2e000 bytes 00:07:10.063 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:10.063 EAL: Setting up physically contiguous memory... 00:07:10.063 EAL: Setting maximum number of open files to 1048576 00:07:10.063 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:10.063 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:10.063 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.063 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:10.063 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.063 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.063 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:10.063 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:10.063 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.063 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:10.063 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.063 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.063 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:10.063 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:10.063 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.063 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:10.063 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.063 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.063 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:10.063 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:10.063 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.063 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:10.063 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.063 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.063 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:10.063 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:10.063 EAL: Hugepages will be freed exactly as allocated. 
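
The four 0x400000000-byte reservations above are self-consistent: each memseg list holds n_segs:8192 pages of hugepage_sz:2097152 bytes, i.e. exactly 16 GiB of virtual address space per list:

    # 8192 segments x 2 MiB hugepages = the 0x400000000-byte VA reservation per memseg list
    echo $(( 8192 * 2097152 ))             # 17179869184 bytes (16 GiB)
    printf '0x%x\n' $(( 8192 * 2097152 ))  # 0x400000000
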
00:07:10.063 EAL: No shared files mode enabled, IPC is disabled 00:07:10.063 EAL: No shared files mode enabled, IPC is disabled 00:07:10.322 EAL: TSC frequency is ~2200000 KHz 00:07:10.322 EAL: Main lcore 0 is ready (tid=7fe9da1b3a80;cpuset=[0]) 00:07:10.322 EAL: Trying to obtain current memory policy. 00:07:10.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.322 EAL: Restoring previous memory policy: 0 00:07:10.322 EAL: request: mp_malloc_sync 00:07:10.322 EAL: No shared files mode enabled, IPC is disabled 00:07:10.322 EAL: Heap on socket 0 was expanded by 2MB 00:07:10.322 EAL: No shared files mode enabled, IPC is disabled 00:07:10.322 EAL: Mem event callback 'spdk:(nil)' registered 00:07:10.322 00:07:10.323 00:07:10.323 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.323 http://cunit.sourceforge.net/ 00:07:10.323 00:07:10.323 00:07:10.323 Suite: components_suite 00:07:10.890 Test: vtophys_malloc_test ...passed 00:07:10.890 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 4MB 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was shrunk by 4MB 00:07:10.890 EAL: Trying to obtain current memory policy. 00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 6MB 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was shrunk by 6MB 00:07:10.890 EAL: Trying to obtain current memory policy. 00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 10MB 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was shrunk by 10MB 00:07:10.890 EAL: Trying to obtain current memory policy. 00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 18MB 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was shrunk by 18MB 00:07:10.890 EAL: Trying to obtain current memory policy. 
00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 34MB 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was shrunk by 34MB 00:07:10.890 EAL: Trying to obtain current memory policy. 00:07:10.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.890 EAL: Restoring previous memory policy: 0 00:07:10.890 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.890 EAL: request: mp_malloc_sync 00:07:10.890 EAL: No shared files mode enabled, IPC is disabled 00:07:10.890 EAL: Heap on socket 0 was expanded by 66MB 00:07:11.149 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.149 EAL: request: mp_malloc_sync 00:07:11.149 EAL: No shared files mode enabled, IPC is disabled 00:07:11.149 EAL: Heap on socket 0 was shrunk by 66MB 00:07:11.149 EAL: Trying to obtain current memory policy. 00:07:11.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.149 EAL: Restoring previous memory policy: 0 00:07:11.149 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.149 EAL: request: mp_malloc_sync 00:07:11.149 EAL: No shared files mode enabled, IPC is disabled 00:07:11.149 EAL: Heap on socket 0 was expanded by 130MB 00:07:11.407 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.407 EAL: request: mp_malloc_sync 00:07:11.407 EAL: No shared files mode enabled, IPC is disabled 00:07:11.407 EAL: Heap on socket 0 was shrunk by 130MB 00:07:11.407 EAL: Trying to obtain current memory policy. 00:07:11.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.665 EAL: Restoring previous memory policy: 0 00:07:11.665 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.665 EAL: request: mp_malloc_sync 00:07:11.665 EAL: No shared files mode enabled, IPC is disabled 00:07:11.665 EAL: Heap on socket 0 was expanded by 258MB 00:07:11.930 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.930 EAL: request: mp_malloc_sync 00:07:11.930 EAL: No shared files mode enabled, IPC is disabled 00:07:11.930 EAL: Heap on socket 0 was shrunk by 258MB 00:07:12.203 EAL: Trying to obtain current memory policy. 00:07:12.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.462 EAL: Restoring previous memory policy: 0 00:07:12.462 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.462 EAL: request: mp_malloc_sync 00:07:12.462 EAL: No shared files mode enabled, IPC is disabled 00:07:12.462 EAL: Heap on socket 0 was expanded by 514MB 00:07:13.397 EAL: Calling mem event callback 'spdk:(nil)' 00:07:13.397 EAL: request: mp_malloc_sync 00:07:13.397 EAL: No shared files mode enabled, IPC is disabled 00:07:13.397 EAL: Heap on socket 0 was shrunk by 514MB 00:07:13.964 EAL: Trying to obtain current memory policy. 
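
The heap growth in these cycles (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with a final 1026 MB cycle below) fits a test buffer that doubles from 2 MB up to 1 GB, each expansion costing the buffer plus a 2 MB pad. That is an inference from the log alone, not from the test source:

    # Hypothetical reconstruction of the expansion sizes logged under
    # vtophys_spdk_malloc_test: a (1 << k) MiB buffer appears to cost (1 << k) + 2 MiB.
    for k in 1 2 3 4 5 6 7 8 9 10; do echo "$(( (1 << k) + 2 ))MB"; done
    # prints 4MB through 1026MB, one per line, matching the expansions in the log
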
00:07:13.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.223 EAL: Restoring previous memory policy: 0 00:07:14.223 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.223 EAL: request: mp_malloc_sync 00:07:14.223 EAL: No shared files mode enabled, IPC is disabled 00:07:14.223 EAL: Heap on socket 0 was expanded by 1026MB 00:07:15.595 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.852 EAL: request: mp_malloc_sync 00:07:15.852 EAL: No shared files mode enabled, IPC is disabled 00:07:15.852 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:17.227 passed 00:07:17.227 00:07:17.227 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.227 suites 1 1 n/a 0 0 00:07:17.227 tests 2 2 2 0 0 00:07:17.227 asserts 6363 6363 6363 0 n/a 00:07:17.227 00:07:17.227 Elapsed time = 6.652 seconds 00:07:17.227 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.227 EAL: request: mp_malloc_sync 00:07:17.227 EAL: No shared files mode enabled, IPC is disabled 00:07:17.227 EAL: Heap on socket 0 was shrunk by 2MB 00:07:17.227 EAL: No shared files mode enabled, IPC is disabled 00:07:17.227 EAL: No shared files mode enabled, IPC is disabled 00:07:17.227 EAL: No shared files mode enabled, IPC is disabled 00:07:17.227 00:07:17.227 real 0m6.954s 00:07:17.228 user 0m5.788s 00:07:17.228 sys 0m1.037s 00:07:17.228 20:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.228 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.228 ************************************ 00:07:17.228 END TEST env_vtophys 00:07:17.228 ************************************ 00:07:17.228 20:49:45 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:17.228 20:49:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.228 20:49:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.228 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.228 ************************************ 00:07:17.228 START TEST env_pci 00:07:17.228 ************************************ 00:07:17.228 20:49:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:17.228 00:07:17.228 00:07:17.228 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.228 http://cunit.sourceforge.net/ 00:07:17.228 00:07:17.228 00:07:17.228 Suite: pci 00:07:17.228 Test: pci_hook ...[2024-06-09 20:49:45.153841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102268 has claimed it 00:07:17.228 passed 00:07:17.228 00:07:17.228 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.228 suites 1 1 n/a 0 0 00:07:17.228 tests 1 1 1 0 0 00:07:17.228 asserts 25 25 25 0 n/a 00:07:17.228 00:07:17.228 Elapsed time = 0.005 seconds 00:07:17.228 EAL: Cannot find device (10000:00:01.0) 00:07:17.228 EAL: Failed to attach device on primary process 00:07:17.228 00:07:17.228 real 0m0.093s 00:07:17.228 user 0m0.058s 00:07:17.228 sys 0m0.035s 00:07:17.228 20:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.228 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.228 ************************************ 00:07:17.228 END TEST env_pci 00:07:17.228 ************************************ 00:07:17.228 20:49:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:17.228 20:49:45 -- env/env.sh@15 -- # uname 00:07:17.228 20:49:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:17.228 20:49:45 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:17.228 20:49:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:17.228 20:49:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:17.228 20:49:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.228 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.228 ************************************ 00:07:17.228 START TEST env_dpdk_post_init 00:07:17.228 ************************************ 00:07:17.228 20:49:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:17.228 EAL: Detected CPU lcores: 10 00:07:17.228 EAL: Detected NUMA nodes: 1 00:07:17.228 EAL: Detected static linkage of DPDK 00:07:17.228 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:17.228 EAL: Selected IOVA mode 'PA' 00:07:17.228 EAL: VFIO support initialized 00:07:17.487 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:17.487 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:17.487 Starting DPDK initialization... 00:07:17.487 Starting SPDK post initialization... 00:07:17.487 SPDK NVMe probe 00:07:17.487 Attaching to 0000:00:06.0 00:07:17.487 Attached to 0000:00:06.0 00:07:17.487 Cleaning up... 00:07:17.487 00:07:17.487 real 0m0.273s 00:07:17.487 user 0m0.083s 00:07:17.487 sys 0m0.095s 00:07:17.487 20:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.487 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.487 ************************************ 00:07:17.487 END TEST env_dpdk_post_init 00:07:17.487 ************************************ 00:07:17.487 20:49:45 -- env/env.sh@26 -- # uname 00:07:17.487 20:49:45 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:17.487 20:49:45 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:17.487 20:49:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.487 20:49:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.487 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.487 ************************************ 00:07:17.487 START TEST env_mem_callbacks 00:07:17.487 ************************************ 00:07:17.487 20:49:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:17.487 EAL: Detected CPU lcores: 10 00:07:17.487 EAL: Detected NUMA nodes: 1 00:07:17.487 EAL: Detected static linkage of DPDK 00:07:17.746 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:17.746 EAL: Selected IOVA mode 'PA' 00:07:17.746 EAL: VFIO support initialized 00:07:17.746 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:17.746 00:07:17.746 00:07:17.746 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.746 http://cunit.sourceforge.net/ 00:07:17.746 00:07:17.746 00:07:17.746 Suite: memory 00:07:17.746 Test: test ... 
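In the callback trace that follows, every multi-megabyte malloc grows the DPDK heap and fires a registration callback (logged as 'register <vaddr> <len>'), and the matching free fires 'unregister'; the 64-byte malloc produces no register line because it is carved out of an already-registered region. To isolate just those events from a saved copy of the output (mem_callbacks.log is an assumed filename):

```bash
# Filter the register/unregister callback events out of the trace; the
# leading space anchors the match at the start of the word.
grep -E ' (un)?register 0x' mem_callbacks.log
```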
00:07:17.746 register 0x200000200000 2097152 00:07:17.746 malloc 3145728 00:07:17.746 register 0x200000400000 4194304 00:07:17.746 buf 0x2000004fffc0 len 3145728 PASSED 00:07:17.746 malloc 64 00:07:17.746 buf 0x2000004ffec0 len 64 PASSED 00:07:17.746 malloc 4194304 00:07:17.746 register 0x200000800000 6291456 00:07:17.746 buf 0x2000009fffc0 len 4194304 PASSED 00:07:17.746 free 0x2000004fffc0 3145728 00:07:17.746 free 0x2000004ffec0 64 00:07:17.746 unregister 0x200000400000 4194304 PASSED 00:07:17.746 free 0x2000009fffc0 4194304 00:07:17.746 unregister 0x200000800000 6291456 PASSED 00:07:17.746 malloc 8388608 00:07:17.746 register 0x200000400000 10485760 00:07:17.746 buf 0x2000005fffc0 len 8388608 PASSED 00:07:17.746 free 0x2000005fffc0 8388608 00:07:17.746 unregister 0x200000400000 10485760 PASSED 00:07:17.746 passed 00:07:17.746 00:07:17.746 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.746 suites 1 1 n/a 0 0 00:07:17.746 tests 1 1 1 0 0 00:07:17.746 asserts 15 15 15 0 n/a 00:07:17.746 00:07:17.746 Elapsed time = 0.050 seconds 00:07:17.746 00:07:17.746 real 0m0.280s 00:07:17.746 user 0m0.115s 00:07:17.746 sys 0m0.065s 00:07:17.746 20:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.746 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.746 ************************************ 00:07:17.746 END TEST env_mem_callbacks 00:07:17.746 ************************************ 00:07:17.746 ************************************ 00:07:17.746 END TEST env 00:07:17.746 ************************************ 00:07:17.746 00:07:17.746 real 0m8.272s 00:07:17.746 user 0m6.569s 00:07:17.746 sys 0m1.373s 00:07:17.746 20:49:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.746 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:18.004 20:49:45 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:18.004 20:49:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.004 20:49:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.004 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:18.004 ************************************ 00:07:18.004 START TEST rpc 00:07:18.004 ************************************ 00:07:18.004 20:49:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:18.004 * Looking for test storage... 00:07:18.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.005 20:49:46 -- rpc/rpc.sh@65 -- # spdk_pid=102398 00:07:18.005 20:49:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.005 20:49:46 -- rpc/rpc.sh@67 -- # waitforlisten 102398 00:07:18.005 20:49:46 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:18.005 20:49:46 -- common/autotest_common.sh@819 -- # '[' -z 102398 ']' 00:07:18.005 20:49:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.005 20:49:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.005 20:49:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
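rpc.sh is booting the target here: build/bin/spdk_tgt is launched with -e bdev, which enables the bdev tracepoint group (hence the tpoint_group_mask of 0x8 reported by trace_get_info further down), and the harness then polls for the /var/tmp/spdk.sock RPC socket. A minimal by-hand equivalent, assuming an SPDK checkout and build at $SPDK_DIR:

```bash
# Start the target with bdev tracepoints and wait for its RPC socket;
# rpc_get_methods is a cheap way to confirm it is answering.
"$SPDK_DIR"/build/bin/spdk_tgt -e bdev &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
"$SPDK_DIR"/scripts/rpc.py rpc_get_methods | head
```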
00:07:18.005 20:49:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.005 20:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:18.005 [2024-06-09 20:49:46.125609] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:18.005 [2024-06-09 20:49:46.125820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102398 ] 00:07:18.263 [2024-06-09 20:49:46.292435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.522 [2024-06-09 20:49:46.480921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:18.522 [2024-06-09 20:49:46.481167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:18.522 [2024-06-09 20:49:46.481203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102398' to capture a snapshot of events at runtime. 00:07:18.522 [2024-06-09 20:49:46.481227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102398 for offline analysis/debug. 00:07:18.522 [2024-06-09 20:49:46.481341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.918 20:49:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.918 20:49:47 -- common/autotest_common.sh@852 -- # return 0 00:07:19.918 20:49:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:19.918 20:49:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:19.918 20:49:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:19.918 20:49:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:19.918 20:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.918 20:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.918 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.918 ************************************ 00:07:19.918 START TEST rpc_integrity 00:07:19.918 ************************************ 00:07:19.918 20:49:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:19.918 20:49:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:19.918 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.918 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.918 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.918 20:49:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:19.918 20:49:47 -- rpc/rpc.sh@13 -- # jq length 00:07:19.918 20:49:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:19.918 20:49:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:19.918 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.918 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.918 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.918 20:49:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:19.918 20:49:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:19.918 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.918 20:49:47 -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.918 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.918 20:49:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:19.918 { 00:07:19.918 "name": "Malloc0", 00:07:19.918 "aliases": [ 00:07:19.918 "55ecbd8c-dc0a-4465-a8c0-e36dc753fafc" 00:07:19.918 ], 00:07:19.918 "product_name": "Malloc disk", 00:07:19.918 "block_size": 512, 00:07:19.918 "num_blocks": 16384, 00:07:19.918 "uuid": "55ecbd8c-dc0a-4465-a8c0-e36dc753fafc", 00:07:19.918 "assigned_rate_limits": { 00:07:19.918 "rw_ios_per_sec": 0, 00:07:19.918 "rw_mbytes_per_sec": 0, 00:07:19.918 "r_mbytes_per_sec": 0, 00:07:19.918 "w_mbytes_per_sec": 0 00:07:19.918 }, 00:07:19.918 "claimed": false, 00:07:19.918 "zoned": false, 00:07:19.918 "supported_io_types": { 00:07:19.918 "read": true, 00:07:19.918 "write": true, 00:07:19.918 "unmap": true, 00:07:19.918 "write_zeroes": true, 00:07:19.918 "flush": true, 00:07:19.918 "reset": true, 00:07:19.918 "compare": false, 00:07:19.918 "compare_and_write": false, 00:07:19.918 "abort": true, 00:07:19.918 "nvme_admin": false, 00:07:19.918 "nvme_io": false 00:07:19.918 }, 00:07:19.918 "memory_domains": [ 00:07:19.918 { 00:07:19.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.918 "dma_device_type": 2 00:07:19.918 } 00:07:19.918 ], 00:07:19.918 "driver_specific": {} 00:07:19.918 } 00:07:19.918 ]' 00:07:19.918 20:49:47 -- rpc/rpc.sh@17 -- # jq length 00:07:19.918 20:49:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:19.918 20:49:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:19.918 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.918 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.918 [2024-06-09 20:49:47.925240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:19.918 [2024-06-09 20:49:47.925351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.918 [2024-06-09 20:49:47.925396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:07:19.918 [2024-06-09 20:49:47.925421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.918 [2024-06-09 20:49:47.927957] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.918 [2024-06-09 20:49:47.928032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:19.918 Passthru0 00:07:19.918 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.918 20:49:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:19.918 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.918 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.918 20:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.918 20:49:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:19.918 { 00:07:19.918 "name": "Malloc0", 00:07:19.918 "aliases": [ 00:07:19.918 "55ecbd8c-dc0a-4465-a8c0-e36dc753fafc" 00:07:19.918 ], 00:07:19.918 "product_name": "Malloc disk", 00:07:19.918 "block_size": 512, 00:07:19.918 "num_blocks": 16384, 00:07:19.918 "uuid": "55ecbd8c-dc0a-4465-a8c0-e36dc753fafc", 00:07:19.918 "assigned_rate_limits": { 00:07:19.918 "rw_ios_per_sec": 0, 00:07:19.918 "rw_mbytes_per_sec": 0, 00:07:19.918 "r_mbytes_per_sec": 0, 00:07:19.918 "w_mbytes_per_sec": 0 00:07:19.918 }, 00:07:19.918 "claimed": true, 00:07:19.918 "claim_type": "exclusive_write", 00:07:19.918 "zoned": false, 00:07:19.918 "supported_io_types": { 00:07:19.918 "read": true, 
00:07:19.918 "write": true, 00:07:19.918 "unmap": true, 00:07:19.918 "write_zeroes": true, 00:07:19.918 "flush": true, 00:07:19.918 "reset": true, 00:07:19.918 "compare": false, 00:07:19.918 "compare_and_write": false, 00:07:19.918 "abort": true, 00:07:19.918 "nvme_admin": false, 00:07:19.918 "nvme_io": false 00:07:19.918 }, 00:07:19.918 "memory_domains": [ 00:07:19.918 { 00:07:19.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.918 "dma_device_type": 2 00:07:19.918 } 00:07:19.918 ], 00:07:19.918 "driver_specific": {} 00:07:19.918 }, 00:07:19.919 { 00:07:19.919 "name": "Passthru0", 00:07:19.919 "aliases": [ 00:07:19.919 "b68c3d53-17f7-512c-a27b-dcbe98698bc2" 00:07:19.919 ], 00:07:19.919 "product_name": "passthru", 00:07:19.919 "block_size": 512, 00:07:19.919 "num_blocks": 16384, 00:07:19.919 "uuid": "b68c3d53-17f7-512c-a27b-dcbe98698bc2", 00:07:19.919 "assigned_rate_limits": { 00:07:19.919 "rw_ios_per_sec": 0, 00:07:19.919 "rw_mbytes_per_sec": 0, 00:07:19.919 "r_mbytes_per_sec": 0, 00:07:19.919 "w_mbytes_per_sec": 0 00:07:19.919 }, 00:07:19.919 "claimed": false, 00:07:19.919 "zoned": false, 00:07:19.919 "supported_io_types": { 00:07:19.919 "read": true, 00:07:19.919 "write": true, 00:07:19.919 "unmap": true, 00:07:19.919 "write_zeroes": true, 00:07:19.919 "flush": true, 00:07:19.919 "reset": true, 00:07:19.919 "compare": false, 00:07:19.919 "compare_and_write": false, 00:07:19.919 "abort": true, 00:07:19.919 "nvme_admin": false, 00:07:19.919 "nvme_io": false 00:07:19.919 }, 00:07:19.919 "memory_domains": [ 00:07:19.919 { 00:07:19.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.919 "dma_device_type": 2 00:07:19.919 } 00:07:19.919 ], 00:07:19.919 "driver_specific": { 00:07:19.919 "passthru": { 00:07:19.919 "name": "Passthru0", 00:07:19.919 "base_bdev_name": "Malloc0" 00:07:19.919 } 00:07:19.919 } 00:07:19.919 } 00:07:19.919 ]' 00:07:19.919 20:49:47 -- rpc/rpc.sh@21 -- # jq length 00:07:19.919 20:49:47 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:19.919 20:49:47 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:19.919 20:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.919 20:49:47 -- common/autotest_common.sh@10 -- # set +x 00:07:19.919 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.919 20:49:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:19.919 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.919 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:19.919 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.919 20:49:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:19.919 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:19.919 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:19.919 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:19.919 20:49:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:19.919 20:49:48 -- rpc/rpc.sh@26 -- # jq length 00:07:20.178 20:49:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:20.178 00:07:20.178 real 0m0.312s 00:07:20.178 user 0m0.203s 00:07:20.178 sys 0m0.024s 00:07:20.178 20:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.178 ************************************ 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 END TEST rpc_integrity 00:07:20.178 ************************************ 00:07:20.178 20:49:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:20.178 20:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:07:20.178 20:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 ************************************ 00:07:20.178 START TEST rpc_plugins 00:07:20.178 ************************************ 00:07:20.178 20:49:48 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:07:20.178 20:49:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:20.178 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.178 20:49:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:20.178 20:49:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:20.178 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.178 20:49:48 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:20.178 { 00:07:20.178 "name": "Malloc1", 00:07:20.178 "aliases": [ 00:07:20.178 "3efb8a72-ee36-4e27-b765-cd9054341d5f" 00:07:20.178 ], 00:07:20.178 "product_name": "Malloc disk", 00:07:20.178 "block_size": 4096, 00:07:20.178 "num_blocks": 256, 00:07:20.178 "uuid": "3efb8a72-ee36-4e27-b765-cd9054341d5f", 00:07:20.178 "assigned_rate_limits": { 00:07:20.178 "rw_ios_per_sec": 0, 00:07:20.178 "rw_mbytes_per_sec": 0, 00:07:20.178 "r_mbytes_per_sec": 0, 00:07:20.178 "w_mbytes_per_sec": 0 00:07:20.178 }, 00:07:20.178 "claimed": false, 00:07:20.178 "zoned": false, 00:07:20.178 "supported_io_types": { 00:07:20.178 "read": true, 00:07:20.178 "write": true, 00:07:20.178 "unmap": true, 00:07:20.178 "write_zeroes": true, 00:07:20.178 "flush": true, 00:07:20.178 "reset": true, 00:07:20.178 "compare": false, 00:07:20.178 "compare_and_write": false, 00:07:20.178 "abort": true, 00:07:20.178 "nvme_admin": false, 00:07:20.178 "nvme_io": false 00:07:20.178 }, 00:07:20.178 "memory_domains": [ 00:07:20.178 { 00:07:20.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.178 "dma_device_type": 2 00:07:20.178 } 00:07:20.178 ], 00:07:20.178 "driver_specific": {} 00:07:20.178 } 00:07:20.178 ]' 00:07:20.178 20:49:48 -- rpc/rpc.sh@32 -- # jq length 00:07:20.178 20:49:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:20.178 20:49:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:20.178 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.178 20:49:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:20.178 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.178 20:49:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:20.178 20:49:48 -- rpc/rpc.sh@36 -- # jq length 00:07:20.178 20:49:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:20.178 00:07:20.178 real 0m0.145s 00:07:20.178 user 0m0.103s 00:07:20.178 sys 0m0.009s 00:07:20.178 20:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 ************************************ 00:07:20.178 END TEST rpc_plugins 00:07:20.178 ************************************ 00:07:20.178 20:49:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:07:20.178 20:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.178 20:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.178 ************************************ 00:07:20.178 START TEST rpc_trace_cmd_test 00:07:20.178 ************************************ 00:07:20.178 20:49:48 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:07:20.178 20:49:48 -- rpc/rpc.sh@40 -- # local info 00:07:20.178 20:49:48 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:20.178 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.178 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.437 20:49:48 -- rpc/rpc.sh@42 -- # info='{ 00:07:20.437 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102398", 00:07:20.437 "tpoint_group_mask": "0x8", 00:07:20.437 "iscsi_conn": { 00:07:20.437 "mask": "0x2", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "scsi": { 00:07:20.437 "mask": "0x4", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "bdev": { 00:07:20.437 "mask": "0x8", 00:07:20.437 "tpoint_mask": "0xffffffffffffffff" 00:07:20.437 }, 00:07:20.437 "nvmf_rdma": { 00:07:20.437 "mask": "0x10", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "nvmf_tcp": { 00:07:20.437 "mask": "0x20", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "ftl": { 00:07:20.437 "mask": "0x40", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "blobfs": { 00:07:20.437 "mask": "0x80", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "dsa": { 00:07:20.437 "mask": "0x200", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "thread": { 00:07:20.437 "mask": "0x400", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "nvme_pcie": { 00:07:20.437 "mask": "0x800", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "iaa": { 00:07:20.437 "mask": "0x1000", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "nvme_tcp": { 00:07:20.437 "mask": "0x2000", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 }, 00:07:20.437 "bdev_nvme": { 00:07:20.437 "mask": "0x4000", 00:07:20.437 "tpoint_mask": "0x0" 00:07:20.437 } 00:07:20.437 }' 00:07:20.437 20:49:48 -- rpc/rpc.sh@43 -- # jq length 00:07:20.437 20:49:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:20.437 20:49:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:20.437 20:49:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:20.437 20:49:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:20.437 20:49:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:20.437 20:49:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:20.437 20:49:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:20.437 20:49:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:20.437 20:49:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:20.437 00:07:20.437 real 0m0.252s 00:07:20.437 user 0m0.238s 00:07:20.437 sys 0m0.010s 00:07:20.437 20:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.437 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.437 ************************************ 00:07:20.437 END TEST rpc_trace_cmd_test 00:07:20.437 ************************************ 00:07:20.696 20:49:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:20.696 20:49:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:20.696 20:49:48 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:07:20.696 20:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.696 20:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.696 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.696 ************************************ 00:07:20.696 START TEST rpc_daemon_integrity 00:07:20.696 ************************************ 00:07:20.696 20:49:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:20.696 20:49:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:20.696 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.696 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.696 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.696 20:49:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:20.696 20:49:48 -- rpc/rpc.sh@13 -- # jq length 00:07:20.696 20:49:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:20.696 20:49:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:20.696 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.696 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.696 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.696 20:49:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:20.696 20:49:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:20.696 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.696 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.696 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.696 20:49:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:20.696 { 00:07:20.696 "name": "Malloc2", 00:07:20.696 "aliases": [ 00:07:20.696 "5f44581f-8417-4d17-b39f-7f77b95432c7" 00:07:20.696 ], 00:07:20.696 "product_name": "Malloc disk", 00:07:20.696 "block_size": 512, 00:07:20.696 "num_blocks": 16384, 00:07:20.696 "uuid": "5f44581f-8417-4d17-b39f-7f77b95432c7", 00:07:20.696 "assigned_rate_limits": { 00:07:20.696 "rw_ios_per_sec": 0, 00:07:20.696 "rw_mbytes_per_sec": 0, 00:07:20.696 "r_mbytes_per_sec": 0, 00:07:20.696 "w_mbytes_per_sec": 0 00:07:20.696 }, 00:07:20.696 "claimed": false, 00:07:20.696 "zoned": false, 00:07:20.696 "supported_io_types": { 00:07:20.696 "read": true, 00:07:20.696 "write": true, 00:07:20.696 "unmap": true, 00:07:20.696 "write_zeroes": true, 00:07:20.696 "flush": true, 00:07:20.696 "reset": true, 00:07:20.696 "compare": false, 00:07:20.696 "compare_and_write": false, 00:07:20.696 "abort": true, 00:07:20.696 "nvme_admin": false, 00:07:20.696 "nvme_io": false 00:07:20.696 }, 00:07:20.696 "memory_domains": [ 00:07:20.696 { 00:07:20.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.697 "dma_device_type": 2 00:07:20.697 } 00:07:20.697 ], 00:07:20.697 "driver_specific": {} 00:07:20.697 } 00:07:20.697 ]' 00:07:20.697 20:49:48 -- rpc/rpc.sh@17 -- # jq length 00:07:20.697 20:49:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:20.697 20:49:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:20.697 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.697 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 [2024-06-09 20:49:48.798032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:20.697 [2024-06-09 20:49:48.798143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.697 [2024-06-09 20:49:48.798200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:20.697 
[2024-06-09 20:49:48.798224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.697 [2024-06-09 20:49:48.800662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.697 [2024-06-09 20:49:48.800748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:20.697 Passthru0 00:07:20.697 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.697 20:49:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:20.697 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.697 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.697 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.697 20:49:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.697 { 00:07:20.697 "name": "Malloc2", 00:07:20.697 "aliases": [ 00:07:20.697 "5f44581f-8417-4d17-b39f-7f77b95432c7" 00:07:20.697 ], 00:07:20.697 "product_name": "Malloc disk", 00:07:20.697 "block_size": 512, 00:07:20.697 "num_blocks": 16384, 00:07:20.697 "uuid": "5f44581f-8417-4d17-b39f-7f77b95432c7", 00:07:20.697 "assigned_rate_limits": { 00:07:20.697 "rw_ios_per_sec": 0, 00:07:20.697 "rw_mbytes_per_sec": 0, 00:07:20.697 "r_mbytes_per_sec": 0, 00:07:20.697 "w_mbytes_per_sec": 0 00:07:20.697 }, 00:07:20.697 "claimed": true, 00:07:20.697 "claim_type": "exclusive_write", 00:07:20.697 "zoned": false, 00:07:20.697 "supported_io_types": { 00:07:20.697 "read": true, 00:07:20.697 "write": true, 00:07:20.697 "unmap": true, 00:07:20.697 "write_zeroes": true, 00:07:20.697 "flush": true, 00:07:20.697 "reset": true, 00:07:20.697 "compare": false, 00:07:20.697 "compare_and_write": false, 00:07:20.697 "abort": true, 00:07:20.697 "nvme_admin": false, 00:07:20.697 "nvme_io": false 00:07:20.697 }, 00:07:20.697 "memory_domains": [ 00:07:20.697 { 00:07:20.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.697 "dma_device_type": 2 00:07:20.697 } 00:07:20.697 ], 00:07:20.697 "driver_specific": {} 00:07:20.697 }, 00:07:20.697 { 00:07:20.697 "name": "Passthru0", 00:07:20.697 "aliases": [ 00:07:20.697 "059c6fa8-7ca4-592b-8bbd-7f1bdf865519" 00:07:20.697 ], 00:07:20.697 "product_name": "passthru", 00:07:20.697 "block_size": 512, 00:07:20.697 "num_blocks": 16384, 00:07:20.697 "uuid": "059c6fa8-7ca4-592b-8bbd-7f1bdf865519", 00:07:20.697 "assigned_rate_limits": { 00:07:20.697 "rw_ios_per_sec": 0, 00:07:20.697 "rw_mbytes_per_sec": 0, 00:07:20.697 "r_mbytes_per_sec": 0, 00:07:20.697 "w_mbytes_per_sec": 0 00:07:20.697 }, 00:07:20.697 "claimed": false, 00:07:20.697 "zoned": false, 00:07:20.697 "supported_io_types": { 00:07:20.697 "read": true, 00:07:20.697 "write": true, 00:07:20.697 "unmap": true, 00:07:20.697 "write_zeroes": true, 00:07:20.697 "flush": true, 00:07:20.697 "reset": true, 00:07:20.697 "compare": false, 00:07:20.697 "compare_and_write": false, 00:07:20.697 "abort": true, 00:07:20.697 "nvme_admin": false, 00:07:20.697 "nvme_io": false 00:07:20.697 }, 00:07:20.697 "memory_domains": [ 00:07:20.697 { 00:07:20.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.697 "dma_device_type": 2 00:07:20.697 } 00:07:20.697 ], 00:07:20.697 "driver_specific": { 00:07:20.697 "passthru": { 00:07:20.697 "name": "Passthru0", 00:07:20.697 "base_bdev_name": "Malloc2" 00:07:20.697 } 00:07:20.697 } 00:07:20.697 } 00:07:20.697 ]' 00:07:20.697 20:49:48 -- rpc/rpc.sh@21 -- # jq length 00:07:20.956 20:49:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.956 20:49:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.956 20:49:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.956 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.956 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.956 20:49:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:20.956 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.956 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.956 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.956 20:49:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:20.956 20:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.956 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.956 20:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.956 20:49:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:20.956 20:49:48 -- rpc/rpc.sh@26 -- # jq length 00:07:20.956 20:49:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:20.956 00:07:20.956 real 0m0.325s 00:07:20.956 user 0m0.215s 00:07:20.956 sys 0m0.026s 00:07:20.956 20:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.956 ************************************ 00:07:20.956 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.956 END TEST rpc_daemon_integrity 00:07:20.956 ************************************ 00:07:20.956 20:49:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:20.956 20:49:49 -- rpc/rpc.sh@84 -- # killprocess 102398 00:07:20.956 20:49:49 -- common/autotest_common.sh@926 -- # '[' -z 102398 ']' 00:07:20.956 20:49:49 -- common/autotest_common.sh@930 -- # kill -0 102398 00:07:20.956 20:49:49 -- common/autotest_common.sh@931 -- # uname 00:07:20.956 20:49:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.956 20:49:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102398 00:07:20.956 20:49:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.956 20:49:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.956 killing process with pid 102398 00:07:20.956 20:49:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102398' 00:07:20.956 20:49:49 -- common/autotest_common.sh@945 -- # kill 102398 00:07:20.956 20:49:49 -- common/autotest_common.sh@950 -- # wait 102398 00:07:22.858 00:07:22.858 real 0m4.965s 00:07:22.858 user 0m5.879s 00:07:22.858 sys 0m0.764s 00:07:22.858 20:49:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.858 20:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 ************************************ 00:07:22.858 END TEST rpc 00:07:22.858 ************************************ 00:07:22.858 20:49:50 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:22.858 20:49:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.858 20:49:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.858 20:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 ************************************ 00:07:22.858 START TEST rpc_client 00:07:22.858 ************************************ 00:07:22.858 20:49:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:23.117 * Looking for test storage... 
00:07:23.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:23.117 20:49:51 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:23.117 OK 00:07:23.117 20:49:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:23.117 00:07:23.117 real 0m0.137s 00:07:23.117 user 0m0.084s 00:07:23.117 sys 0m0.065s 00:07:23.117 20:49:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.117 ************************************ 00:07:23.117 END TEST rpc_client 00:07:23.117 ************************************ 00:07:23.117 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.117 20:49:51 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.117 20:49:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.117 20:49:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.117 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.117 ************************************ 00:07:23.117 START TEST json_config 00:07:23.117 ************************************ 00:07:23.117 20:49:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.117 20:49:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.117 20:49:51 -- nvmf/common.sh@7 -- # uname -s 00:07:23.117 20:49:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.117 20:49:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.117 20:49:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.117 20:49:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.117 20:49:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.117 20:49:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.117 20:49:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.117 20:49:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.117 20:49:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.117 20:49:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.117 20:49:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c22388e3-4d44-4000-a885-f4f931686a91 00:07:23.117 20:49:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=c22388e3-4d44-4000-a885-f4f931686a91 00:07:23.117 20:49:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.117 20:49:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.117 20:49:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:23.117 20:49:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.117 20:49:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.117 20:49:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.117 20:49:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.117 20:49:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:23.117 20:49:51 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:23.117 20:49:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:23.117 20:49:51 -- paths/export.sh@5 -- # export PATH 00:07:23.117 20:49:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:23.117 20:49:51 -- nvmf/common.sh@46 -- # : 0 00:07:23.117 20:49:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:23.117 20:49:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:23.117 20:49:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:23.117 20:49:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.117 20:49:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.117 20:49:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:23.117 20:49:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:23.117 20:49:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:23.117 20:49:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:23.117 20:49:51 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:07:23.117 20:49:51 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:23.117 20:49:51 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:23.117 20:49:51 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:23.117 20:49:51 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:23.117 20:49:51 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:23.117 20:49:51 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:23.117 20:49:51 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:23.117 20:49:51 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:23.117 INFO: JSON configuration test init 00:07:23.117 20:49:51 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:23.117 20:49:51 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:23.117 20:49:51 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:23.117 20:49:51 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:23.117 20:49:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:23.117 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.117 20:49:51 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:23.117 20:49:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:23.117 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.117 Waiting for target to run... 00:07:23.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:23.117 20:49:51 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:23.117 20:49:51 -- json_config/json_config.sh@98 -- # local app=target 00:07:23.117 20:49:51 -- json_config/json_config.sh@99 -- # shift 00:07:23.117 20:49:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:23.117 20:49:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.117 20:49:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=102703 00:07:23.117 20:49:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:23.117 20:49:51 -- json_config/json_config.sh@114 -- # waitforlisten 102703 /var/tmp/spdk_tgt.sock 00:07:23.117 20:49:51 -- common/autotest_common.sh@819 -- # '[' -z 102703 ']' 00:07:23.117 20:49:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.117 20:49:51 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:23.117 20:49:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.117 20:49:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.117 20:49:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.117 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.376 [2024-06-09 20:49:51.305327] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
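For json_config the harness starts a second target on its own socket: -m 0x1 pins it to one core, -s 1024 caps its hugepage memory at 1024 MB, -r moves the RPC socket to /var/tmp/spdk_tgt.sock, and --wait-for-rpc holds subsystem initialization until a configuration is loaded — which the test supplies from gen_nvme.sh, as the trace below shows. Condensed into a sketch:

```bash
# json_config's target bring-up, condensed from this trace (the socket-wait
# loop is elided; same pattern as before).
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
"$SPDK_DIR"/scripts/gen_nvme.sh --json-with-subsystems \
    | "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
```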
00:07:23.376 [2024-06-09 20:49:51.305825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102703 ] 00:07:23.634 [2024-06-09 20:49:51.769847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.893 [2024-06-09 20:49:51.938591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.893 [2024-06-09 20:49:51.939036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.151 00:07:24.151 20:49:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:24.151 20:49:52 -- common/autotest_common.sh@852 -- # return 0 00:07:24.151 20:49:52 -- json_config/json_config.sh@115 -- # echo '' 00:07:24.151 20:49:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:24.151 20:49:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:24.151 20:49:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:24.151 20:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:24.151 20:49:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:24.151 20:49:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:24.151 20:49:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:24.151 20:49:52 -- common/autotest_common.sh@10 -- # set +x 00:07:24.151 20:49:52 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:24.151 20:49:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:24.151 20:49:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:25.084 20:49:53 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:25.084 20:49:53 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:25.084 20:49:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.084 20:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:25.084 20:49:53 -- json_config/json_config.sh@48 -- # local ret=0 00:07:25.084 20:49:53 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:25.084 20:49:53 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:25.084 20:49:53 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:25.084 20:49:53 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:25.084 20:49:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:25.343 20:49:53 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:25.343 20:49:53 -- json_config/json_config.sh@51 -- # local get_types 00:07:25.343 20:49:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:25.343 20:49:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:25.343 20:49:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:25.343 20:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 20:49:53 -- json_config/json_config.sh@58 -- # return 0 00:07:25.343 20:49:53 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:25.343 20:49:53 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:07:25.343 20:49:53 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:25.343 20:49:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:25.343 20:49:53 -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 20:49:53 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:25.343 20:49:53 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:25.343 20:49:53 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:25.343 20:49:53 -- json_config/json_config.sh@164 -- # get_notifications 00:07:25.343 20:49:53 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:25.343 20:49:53 -- json_config/json_config.sh@64 -- # IFS=: 00:07:25.343 20:49:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:25.343 20:49:53 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:25.343 20:49:53 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:25.343 20:49:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:25.602 20:49:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:25.602 20:49:53 -- json_config/json_config.sh@64 -- # IFS=: 00:07:25.602 20:49:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:25.602 20:49:53 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:25.602 20:49:53 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:25.602 20:49:53 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:25.602 20:49:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:25.860 Nvme0n1p0 Nvme0n1p1 00:07:25.860 20:49:53 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:25.860 20:49:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:26.119 [2024-06-09 20:49:54.053239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:26.119 [2024-06-09 20:49:54.053493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:26.119 00:07:26.119 20:49:54 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:26.119 20:49:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:26.119 Malloc3 00:07:26.119 20:49:54 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:26.119 20:49:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:26.377 [2024-06-09 20:49:54.440844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:26.377 [2024-06-09 20:49:54.441090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.377 [2024-06-09 20:49:54.441170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:26.377 [2024-06-09 20:49:54.441300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:26.377 [2024-06-09 20:49:54.444063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.377 [2024-06-09 20:49:54.444264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:26.377 PTBdevFromMalloc3 00:07:26.377 20:49:54 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:26.377 20:49:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:26.636 Null0 00:07:26.636 20:49:54 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:26.636 20:49:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:26.895 Malloc0 00:07:26.896 20:49:54 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:26.896 20:49:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:26.896 Malloc1 00:07:26.896 20:49:55 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:26.896 20:49:55 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:27.462 102400+0 records in 00:07:27.462 102400+0 records out 00:07:27.462 104857600 bytes (105 MB, 100 MiB) copied, 0.296768 s, 353 MB/s 00:07:27.462 20:49:55 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:27.462 20:49:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:27.462 aio_disk 00:07:27.462 20:49:55 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:27.462 20:49:55 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:27.462 20:49:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:27.720 360f7ca1-ceaf-494c-8905-6f7a4b0792d4 00:07:27.720 20:49:55 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:27.720 20:49:55 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:27.720 20:49:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:27.978 20:49:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:27.978 20:49:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:28.237 20:49:56 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:28.237 20:49:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:28.495 20:49:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:28.495 20:49:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:28.495 20:49:56 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:28.495 20:49:56 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:28.495 20:49:56 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 00:07:28.495 20:49:56 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:28.495 20:49:56 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:28.495 20:49:56 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:28.495 20:49:56 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 00:07:28.495 20:49:56 -- json_config/json_config.sh@74 -- # sort 00:07:28.754 20:49:56 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:28.754 20:49:56 -- json_config/json_config.sh@75 -- # sort 00:07:28.754 20:49:56 -- json_config/json_config.sh@75 -- # get_notifications 00:07:28.754 20:49:56 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:28.754 20:49:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:28.754 20:49:56 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:28.754 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:28.754 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:28.754 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@65 -- # echo bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # IFS=: 00:07:29.012 20:49:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:29.012 20:49:56 -- json_config/json_config.sh@77 
-- # [[ bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f bdev_register:aio_disk bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\c\b\7\3\a\2\4\-\e\5\6\7\-\4\e\c\f\-\b\f\3\3\-\a\4\1\0\d\9\1\9\b\0\7\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\8\5\8\5\0\8\4\-\5\7\1\6\-\4\c\0\0\-\b\a\3\b\-\8\b\3\8\c\7\6\8\9\8\1\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\a\c\8\3\8\e\2\-\a\7\d\7\-\4\6\3\e\-\a\a\8\1\-\f\5\0\6\a\1\9\b\5\0\8\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\1\a\b\0\8\b\e\-\a\e\f\3\-\4\4\4\3\-\9\8\a\2\-\1\e\f\2\a\4\b\6\a\6\2\6 ]] 00:07:29.012 20:49:56 -- json_config/json_config.sh@89 -- # cat 00:07:29.013 20:49:56 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f bdev_register:aio_disk bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 00:07:29.013 Expected events matched: 00:07:29.013 bdev_register:1cb73a24-e567-4ecf-bf33-a410d919b079 00:07:29.013 bdev_register:98585084-5716-4c00-ba3b-8b38c7689817 00:07:29.013 bdev_register:Malloc0 00:07:29.013 bdev_register:Malloc0p0 00:07:29.013 bdev_register:Malloc0p1 00:07:29.013 bdev_register:Malloc0p2 00:07:29.013 bdev_register:Malloc1 00:07:29.013 bdev_register:Malloc3 00:07:29.013 bdev_register:Null0 00:07:29.013 bdev_register:Nvme0n1 00:07:29.013 bdev_register:Nvme0n1p0 00:07:29.013 bdev_register:Nvme0n1p1 00:07:29.013 bdev_register:PTBdevFromMalloc3 00:07:29.013 bdev_register:aac838e2-a7d7-463e-aa81-f506a19b508f 00:07:29.013 bdev_register:aio_disk 00:07:29.013 bdev_register:d1ab08be-aef3-4443-98a2-1ef2a4b6a626 00:07:29.013 20:49:56 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:29.013 20:49:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.013 20:49:56 -- common/autotest_common.sh@10 -- # set +x 00:07:29.013 20:49:56 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:29.013 20:49:56 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:29.013 20:49:56 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:29.013 20:49:56 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:29.013 20:49:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.013 20:49:56 -- common/autotest_common.sh@10 -- # set +x 00:07:29.013 
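The "Expected events matched" block above boils down to comparing two sorted lists: the bdev_register events the test queued up and the ones the target actually recorded. A minimal standalone sketch of that comparison, with the socket and rpc.py path taken from the log, jq assumed to be installed, and an illustrative two-entry expected list:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Render recorded notifications as "type:ctx", the form the harness compares
  recorded=$($rpc notify_get_notifications -i 0 \
      | jq -r '.[] | "\(.type):\(.ctx)"' | sort)
  expected=$(printf '%s\n' bdev_register:Malloc0 bdev_register:Null0 | sort)
  [ "$recorded" = "$expected" ] && echo 'events matched' || echo 'mismatch'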
20:49:57 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:29.013 20:49:57 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:29.013 20:49:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:29.270 MallocBdevForConfigChangeCheck 00:07:29.271 20:49:57 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:29.271 20:49:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.271 20:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:29.271 20:49:57 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:29.271 20:49:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:29.528 INFO: shutting down applications... 00:07:29.528 20:49:57 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:29.528 20:49:57 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:29.528 20:49:57 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:29.528 20:49:57 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:29.528 20:49:57 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:29.787 [2024-06-09 20:49:57.797718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:30.045 Calling clear_vhost_scsi_subsystem 00:07:30.045 Calling clear_iscsi_subsystem 00:07:30.045 Calling clear_vhost_blk_subsystem 00:07:30.045 Calling clear_nbd_subsystem 00:07:30.045 Calling clear_nvmf_subsystem 00:07:30.045 Calling clear_bdev_subsystem 00:07:30.045 Calling clear_accel_subsystem 00:07:30.045 Calling clear_iobuf_subsystem 00:07:30.045 Calling clear_sock_subsystem 00:07:30.045 Calling clear_vmd_subsystem 00:07:30.045 Calling clear_scheduler_subsystem 00:07:30.045 20:49:57 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:30.045 20:49:57 -- json_config/json_config.sh@396 -- # count=100 00:07:30.045 20:49:57 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:30.045 20:49:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.045 20:49:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:30.045 20:49:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:30.303 20:49:58 -- json_config/json_config.sh@398 -- # break 00:07:30.303 20:49:58 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:30.303 20:49:58 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:30.303 20:49:58 -- json_config/json_config.sh@120 -- # local app=target 00:07:30.303 20:49:58 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:30.303 20:49:58 -- json_config/json_config.sh@124 -- # [[ -n 102703 ]] 00:07:30.303 20:49:58 -- json_config/json_config.sh@127 -- # kill -SIGINT 102703 00:07:30.303 20:49:58 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:30.303 20:49:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:30.303 20:49:58 -- 
json_config/json_config.sh@130 -- # kill -0 102703 00:07:30.303 20:49:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:30.870 20:49:58 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:30.870 20:49:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:30.870 20:49:58 -- json_config/json_config.sh@130 -- # kill -0 102703 00:07:30.870 20:49:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:31.437 SPDK target shutdown done 00:07:31.437 INFO: relaunching applications... 00:07:31.437 20:49:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:31.437 20:49:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:31.437 20:49:59 -- json_config/json_config.sh@130 -- # kill -0 102703 00:07:31.437 20:49:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:31.437 20:49:59 -- json_config/json_config.sh@132 -- # break 00:07:31.437 20:49:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:31.437 20:49:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:31.437 20:49:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:31.437 20:49:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.437 20:49:59 -- json_config/json_config.sh@98 -- # local app=target 00:07:31.438 20:49:59 -- json_config/json_config.sh@99 -- # shift 00:07:31.438 20:49:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:31.438 20:49:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:31.438 20:49:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:31.438 20:49:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:31.438 20:49:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:31.438 Waiting for target to run... 00:07:31.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:31.438 20:49:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=102953 00:07:31.438 20:49:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.438 20:49:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:31.438 20:49:59 -- json_config/json_config.sh@114 -- # waitforlisten 102953 /var/tmp/spdk_tgt.sock 00:07:31.438 20:49:59 -- common/autotest_common.sh@819 -- # '[' -z 102953 ']' 00:07:31.438 20:49:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:31.438 20:49:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:31.438 20:49:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:31.438 20:49:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:31.438 20:49:59 -- common/autotest_common.sh@10 -- # set +x 00:07:31.438 [2024-06-09 20:49:59.390843] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
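The shutdown that just ran is a plain signal-and-poll loop: send SIGINT, then check up to thirty times at half-second intervals whether the pid is gone. Stripped of the harness plumbing it amounts to the sketch below, with $app_pid standing in for the target pid (102703 here):

  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # loop while still alive
      sleep 0.5
  done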
00:07:31.438 [2024-06-09 20:49:59.391275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102953 ] 00:07:31.697 [2024-06-09 20:49:59.826279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.955 [2024-06-09 20:49:59.989034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:31.955 [2024-06-09 20:49:59.989462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.889 [2024-06-09 20:50:00.723655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:32.889 [2024-06-09 20:50:00.723998] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:32.889 [2024-06-09 20:50:00.731686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:32.889 [2024-06-09 20:50:00.731910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:32.889 [2024-06-09 20:50:00.739710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:32.889 [2024-06-09 20:50:00.739930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:32.889 [2024-06-09 20:50:00.740094] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:32.889 [2024-06-09 20:50:00.833557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:32.889 [2024-06-09 20:50:00.833819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.889 [2024-06-09 20:50:00.834022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:32.889 [2024-06-09 20:50:00.834260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.889 [2024-06-09 20:50:00.834840] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.889 [2024-06-09 20:50:00.835072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:33.148 00:07:33.148 INFO: Checking if target configuration is the same... 00:07:33.148 20:50:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.148 20:50:01 -- common/autotest_common.sh@852 -- # return 0 00:07:33.148 20:50:01 -- json_config/json_config.sh@115 -- # echo '' 00:07:33.148 20:50:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:33.148 20:50:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:33.148 20:50:01 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.148 20:50:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:33.148 20:50:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.148 + '[' 2 -ne 2 ']' 00:07:33.148 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:33.148 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:33.148 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:33.148 +++ basename /dev/fd/62 00:07:33.148 ++ mktemp /tmp/62.XXX 00:07:33.148 + tmp_file_1=/tmp/62.4Qc 00:07:33.148 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.148 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.148 + tmp_file_2=/tmp/spdk_tgt_config.json.39A 00:07:33.148 + ret=0 00:07:33.148 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:33.405 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:33.405 + diff -u /tmp/62.4Qc /tmp/spdk_tgt_config.json.39A 00:07:33.405 + echo 'INFO: JSON config files are the same' 00:07:33.405 INFO: JSON config files are the same 00:07:33.405 + rm /tmp/62.4Qc /tmp/spdk_tgt_config.json.39A 00:07:33.405 + exit 0 00:07:33.405 INFO: changing configuration and checking if this can be detected... 00:07:33.405 20:50:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:33.405 20:50:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:33.405 20:50:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.405 20:50:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.972 20:50:01 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.972 20:50:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:33.972 20:50:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.972 + '[' 2 -ne 2 ']' 00:07:33.972 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:33.972 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:33.972 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:33.972 +++ basename /dev/fd/62 00:07:33.972 ++ mktemp /tmp/62.XXX 00:07:33.972 + tmp_file_1=/tmp/62.IA6 00:07:33.972 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:33.972 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.972 + tmp_file_2=/tmp/spdk_tgt_config.json.2MX 00:07:33.972 + ret=0 00:07:33.972 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:34.230 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:34.230 + diff -u /tmp/62.IA6 /tmp/spdk_tgt_config.json.2MX 00:07:34.230 + ret=1 00:07:34.230 + echo '=== Start of file: /tmp/62.IA6 ===' 00:07:34.230 + cat /tmp/62.IA6 00:07:34.230 + echo '=== End of file: /tmp/62.IA6 ===' 00:07:34.230 + echo '' 00:07:34.230 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2MX ===' 00:07:34.230 + cat /tmp/spdk_tgt_config.json.2MX 00:07:34.230 + echo '=== End of file: /tmp/spdk_tgt_config.json.2MX ===' 00:07:34.230 + echo '' 00:07:34.230 + rm /tmp/62.IA6 /tmp/spdk_tgt_config.json.2MX 00:07:34.230 + exit 1 00:07:34.230 INFO: configuration change detected. 00:07:34.230 20:50:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
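Both verdicts above, "JSON config files are the same" and "configuration change detected.", come from one recipe: dump the live config, normalize both sides with config_filter.py -method sort, and diff. A condensed sketch of what json_diff.sh does, assuming config_filter.py reads stdin when given no file (as the traces suggest) and using fixed /tmp names instead of mktemp:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $rpc save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref.json
  diff -u /tmp/ref.json /tmp/live.json \
      && echo 'INFO: JSON config files are the same' \
      || echo 'INFO: configuration change detected.'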
00:07:34.230 20:50:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:34.230 20:50:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:34.230 20:50:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.230 20:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:34.230 20:50:02 -- json_config/json_config.sh@360 -- # local ret=0 00:07:34.230 20:50:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:34.230 20:50:02 -- json_config/json_config.sh@370 -- # [[ -n 102953 ]] 00:07:34.230 20:50:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:34.230 20:50:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:34.230 20:50:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.230 20:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:34.230 20:50:02 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:34.230 20:50:02 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:34.230 20:50:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:34.487 20:50:02 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:34.487 20:50:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:34.745 20:50:02 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:34.745 20:50:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:35.003 20:50:03 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:35.003 20:50:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:35.261 20:50:03 -- json_config/json_config.sh@246 -- # uname -s 00:07:35.261 20:50:03 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:35.261 20:50:03 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:35.261 20:50:03 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:35.261 20:50:03 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:35.261 20:50:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:35.261 20:50:03 -- common/autotest_common.sh@10 -- # set +x 00:07:35.520 20:50:03 -- json_config/json_config.sh@376 -- # killprocess 102953 00:07:35.520 20:50:03 -- common/autotest_common.sh@926 -- # '[' -z 102953 ']' 00:07:35.520 20:50:03 -- common/autotest_common.sh@930 -- # kill -0 102953 00:07:35.520 20:50:03 -- common/autotest_common.sh@931 -- # uname 00:07:35.520 20:50:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:35.520 20:50:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102953 00:07:35.520 20:50:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:35.520 20:50:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:35.520 20:50:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102953' 00:07:35.520 killing process with pid 102953 00:07:35.520 20:50:03 -- common/autotest_common.sh@945 -- # kill 102953 00:07:35.520 20:50:03 -- common/autotest_common.sh@950 -- # wait 102953 00:07:36.456 20:50:04 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:36.456 20:50:04 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:36.456 20:50:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:36.456 20:50:04 -- common/autotest_common.sh@10 -- # set +x 00:07:36.456 INFO: Success 00:07:36.456 20:50:04 -- json_config/json_config.sh@381 -- # return 0 00:07:36.456 20:50:04 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:36.456 00:07:36.457 real 0m13.460s 00:07:36.457 user 0m19.096s 00:07:36.457 sys 0m2.509s 00:07:36.457 20:50:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.457 20:50:04 -- common/autotest_common.sh@10 -- # set +x 00:07:36.457 ************************************ 00:07:36.457 END TEST json_config 00:07:36.457 ************************************ 00:07:36.715 20:50:04 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:36.715 20:50:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.715 20:50:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.715 20:50:04 -- common/autotest_common.sh@10 -- # set +x 00:07:36.715 ************************************ 00:07:36.715 START TEST json_config_extra_key 00:07:36.715 ************************************ 00:07:36.715 20:50:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:36.715 20:50:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.715 20:50:04 -- nvmf/common.sh@7 -- # uname -s 00:07:36.715 20:50:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.715 20:50:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.715 20:50:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.715 20:50:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.715 20:50:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.715 20:50:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.715 20:50:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.716 20:50:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.716 20:50:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.716 20:50:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.716 20:50:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3de85860-b575-4cd2-98c1-4ff75ff527d5 00:07:36.716 20:50:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=3de85860-b575-4cd2-98c1-4ff75ff527d5 00:07:36.716 20:50:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.716 20:50:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.716 20:50:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:36.716 20:50:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.716 20:50:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.716 20:50:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.716 20:50:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.716 20:50:04 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:36.716 20:50:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:36.716 20:50:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:36.716 20:50:04 -- paths/export.sh@5 -- # export PATH 00:07:36.716 20:50:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:36.716 20:50:04 -- nvmf/common.sh@46 -- # : 0 00:07:36.716 20:50:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:36.716 20:50:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:36.716 20:50:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:36.716 20:50:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.716 20:50:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.716 20:50:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:36.716 20:50:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:36.716 20:50:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:36.716 INFO: launching applications... 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
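Of the values nvmf/common.sh exported above, only the host NQN/ID pair is generated at run time, and the ID is simply the UUID tail of the NQN (the values match the log). Roughly, with nvme-cli assumed installed:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:3de85860-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # 3de85860-b575-4cd2-98c1-4ff75ff527d5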
00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=103137 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:36.716 Waiting for target to run... 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 103137 /var/tmp/spdk_tgt.sock 00:07:36.716 20:50:04 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:36.716 20:50:04 -- common/autotest_common.sh@819 -- # '[' -z 103137 ']' 00:07:36.716 20:50:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:36.716 20:50:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:36.716 20:50:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:36.716 20:50:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.716 20:50:04 -- common/autotest_common.sh@10 -- # set +x 00:07:36.716 [2024-06-09 20:50:04.826328] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:36.716 [2024-06-09 20:50:04.827183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103137 ] 00:07:37.283 [2024-06-09 20:50:05.287712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.283 [2024-06-09 20:50:05.444071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.283 [2024-06-09 20:50:05.444329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.665 20:50:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:38.665 20:50:06 -- common/autotest_common.sh@852 -- # return 0 00:07:38.665 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:38.665 INFO: shutting down applications... 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
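The extra_key target above was brought up from a canned JSON file rather than RPC by RPC. Before it is torn down below, note that a quick spot-check that such a config took effect is to ask the running target to dump its live configuration back (a sketch; jq assumed):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      save_config | jq -r '.subsystems[].subsystem'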
00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 103137 ]] 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 103137 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:38.665 20:50:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:38.924 20:50:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:38.924 20:50:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:38.924 20:50:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:38.924 20:50:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:39.492 20:50:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:39.492 20:50:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:39.492 20:50:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:39.492 20:50:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:40.059 20:50:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:40.059 20:50:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:40.059 20:50:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:40.059 20:50:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:40.627 20:50:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:40.627 20:50:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:40.627 20:50:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:40.627 20:50:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103137 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:40.886 SPDK target shutdown done 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:40.886 Success 00:07:40.886 20:50:09 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:40.886 00:07:40.886 real 0m4.363s 00:07:40.886 user 0m4.035s 00:07:40.886 sys 0m0.578s 00:07:40.886 20:50:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.886 20:50:09 -- common/autotest_common.sh@10 -- # set +x 00:07:40.886 ************************************ 00:07:40.886 END TEST json_config_extra_key 00:07:40.886 ************************************ 00:07:41.145 20:50:09 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:41.145 20:50:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.145 20:50:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.145 20:50:09 -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.145 ************************************ 00:07:41.145 START TEST alias_rpc 00:07:41.145 ************************************ 00:07:41.145 20:50:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:41.145 * Looking for test storage... 00:07:41.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:41.145 20:50:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:41.145 20:50:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103252 00:07:41.145 20:50:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103252 00:07:41.145 20:50:09 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.145 20:50:09 -- common/autotest_common.sh@819 -- # '[' -z 103252 ']' 00:07:41.145 20:50:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.145 20:50:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.145 20:50:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.145 20:50:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.145 20:50:09 -- common/autotest_common.sh@10 -- # set +x 00:07:41.145 [2024-06-09 20:50:09.218436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:41.145 [2024-06-09 20:50:09.218619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103252 ] 00:07:41.404 [2024-06-09 20:50:09.372095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.404 [2024-06-09 20:50:09.547883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.404 [2024-06-09 20:50:09.548114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.783 20:50:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.783 20:50:10 -- common/autotest_common.sh@852 -- # return 0 00:07:42.783 20:50:10 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:43.042 20:50:11 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103252 00:07:43.042 20:50:11 -- common/autotest_common.sh@926 -- # '[' -z 103252 ']' 00:07:43.042 20:50:11 -- common/autotest_common.sh@930 -- # kill -0 103252 00:07:43.042 20:50:11 -- common/autotest_common.sh@931 -- # uname 00:07:43.042 20:50:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:43.042 20:50:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103252 00:07:43.042 20:50:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:43.042 20:50:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:43.042 killing process with pid 103252 00:07:43.042 20:50:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103252' 00:07:43.042 20:50:11 -- common/autotest_common.sh@945 -- # kill 103252 00:07:43.042 20:50:11 -- common/autotest_common.sh@950 -- # wait 103252 00:07:45.576 ************************************ 00:07:45.576 END TEST alias_rpc 00:07:45.576 ************************************ 00:07:45.576 00:07:45.576 real 
0m4.032s 00:07:45.576 user 0m4.382s 00:07:45.576 sys 0m0.528s 00:07:45.576 20:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.576 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:45.576 20:50:13 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:45.576 20:50:13 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:45.576 20:50:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.576 20:50:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.576 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:45.576 ************************************ 00:07:45.576 START TEST spdkcli_tcp 00:07:45.576 ************************************ 00:07:45.576 20:50:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:45.576 * Looking for test storage... 00:07:45.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:45.576 20:50:13 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:45.576 20:50:13 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:45.576 20:50:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:45.576 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=103359 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:45.576 20:50:13 -- spdkcli/tcp.sh@27 -- # waitforlisten 103359 00:07:45.576 20:50:13 -- common/autotest_common.sh@819 -- # '[' -z 103359 ']' 00:07:45.576 20:50:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.576 20:50:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:45.576 20:50:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.576 20:50:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:45.576 20:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:45.576 [2024-06-09 20:50:13.326929] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:45.576 [2024-06-09 20:50:13.327634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103359 ] 00:07:45.576 [2024-06-09 20:50:13.496318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.576 [2024-06-09 20:50:13.689165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.576 [2024-06-09 20:50:13.689584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.576 [2024-06-09 20:50:13.689617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.954 20:50:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:46.954 20:50:14 -- common/autotest_common.sh@852 -- # return 0 00:07:46.954 20:50:14 -- spdkcli/tcp.sh@31 -- # socat_pid=103395 00:07:46.954 20:50:14 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:46.954 20:50:14 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:47.213 [ 00:07:47.213 "spdk_get_version", 00:07:47.213 "rpc_get_methods", 00:07:47.213 "trace_get_info", 00:07:47.213 "trace_get_tpoint_group_mask", 00:07:47.213 "trace_disable_tpoint_group", 00:07:47.213 "trace_enable_tpoint_group", 00:07:47.213 "trace_clear_tpoint_mask", 00:07:47.213 "trace_set_tpoint_mask", 00:07:47.213 "framework_get_pci_devices", 00:07:47.213 "framework_get_config", 00:07:47.213 "framework_get_subsystems", 00:07:47.213 "iobuf_get_stats", 00:07:47.213 "iobuf_set_options", 00:07:47.213 "sock_set_default_impl", 00:07:47.214 "sock_impl_set_options", 00:07:47.214 "sock_impl_get_options", 00:07:47.214 "vmd_rescan", 00:07:47.214 "vmd_remove_device", 00:07:47.214 "vmd_enable", 00:07:47.214 "accel_get_stats", 00:07:47.214 "accel_set_options", 00:07:47.214 "accel_set_driver", 00:07:47.214 "accel_crypto_key_destroy", 00:07:47.214 "accel_crypto_keys_get", 00:07:47.214 "accel_crypto_key_create", 00:07:47.214 "accel_assign_opc", 00:07:47.214 "accel_get_module_info", 00:07:47.214 "accel_get_opc_assignments", 00:07:47.214 "notify_get_notifications", 00:07:47.214 "notify_get_types", 00:07:47.214 "bdev_get_histogram", 00:07:47.214 "bdev_enable_histogram", 00:07:47.214 "bdev_set_qos_limit", 00:07:47.214 "bdev_set_qd_sampling_period", 00:07:47.214 "bdev_get_bdevs", 00:07:47.214 "bdev_reset_iostat", 00:07:47.214 "bdev_get_iostat", 00:07:47.214 "bdev_examine", 00:07:47.214 "bdev_wait_for_examine", 00:07:47.214 "bdev_set_options", 00:07:47.214 "scsi_get_devices", 00:07:47.214 "thread_set_cpumask", 00:07:47.214 "framework_get_scheduler", 00:07:47.214 "framework_set_scheduler", 00:07:47.214 "framework_get_reactors", 00:07:47.214 "thread_get_io_channels", 00:07:47.214 "thread_get_pollers", 00:07:47.214 "thread_get_stats", 00:07:47.214 "framework_monitor_context_switch", 00:07:47.214 "spdk_kill_instance", 00:07:47.214 "log_enable_timestamps", 00:07:47.214 "log_get_flags", 00:07:47.214 "log_clear_flag", 00:07:47.214 "log_set_flag", 00:07:47.214 "log_get_level", 00:07:47.214 "log_set_level", 00:07:47.214 "log_get_print_level", 00:07:47.214 "log_set_print_level", 00:07:47.214 "framework_enable_cpumask_locks", 00:07:47.214 "framework_disable_cpumask_locks", 00:07:47.214 "framework_wait_init", 00:07:47.214 "framework_start_init", 00:07:47.214 "virtio_blk_create_transport", 00:07:47.214 "virtio_blk_get_transports", 
00:07:47.214 "vhost_controller_set_coalescing", 00:07:47.214 "vhost_get_controllers", 00:07:47.214 "vhost_delete_controller", 00:07:47.214 "vhost_create_blk_controller", 00:07:47.214 "vhost_scsi_controller_remove_target", 00:07:47.214 "vhost_scsi_controller_add_target", 00:07:47.214 "vhost_start_scsi_controller", 00:07:47.214 "vhost_create_scsi_controller", 00:07:47.214 "nbd_get_disks", 00:07:47.214 "nbd_stop_disk", 00:07:47.214 "nbd_start_disk", 00:07:47.214 "env_dpdk_get_mem_stats", 00:07:47.214 "nvmf_subsystem_get_listeners", 00:07:47.214 "nvmf_subsystem_get_qpairs", 00:07:47.214 "nvmf_subsystem_get_controllers", 00:07:47.214 "nvmf_get_stats", 00:07:47.214 "nvmf_get_transports", 00:07:47.214 "nvmf_create_transport", 00:07:47.214 "nvmf_get_targets", 00:07:47.214 "nvmf_delete_target", 00:07:47.214 "nvmf_create_target", 00:07:47.214 "nvmf_subsystem_allow_any_host", 00:07:47.214 "nvmf_subsystem_remove_host", 00:07:47.214 "nvmf_subsystem_add_host", 00:07:47.214 "nvmf_subsystem_remove_ns", 00:07:47.214 "nvmf_subsystem_add_ns", 00:07:47.214 "nvmf_subsystem_listener_set_ana_state", 00:07:47.214 "nvmf_discovery_get_referrals", 00:07:47.214 "nvmf_discovery_remove_referral", 00:07:47.214 "nvmf_discovery_add_referral", 00:07:47.214 "nvmf_subsystem_remove_listener", 00:07:47.214 "nvmf_subsystem_add_listener", 00:07:47.214 "nvmf_delete_subsystem", 00:07:47.214 "nvmf_create_subsystem", 00:07:47.214 "nvmf_get_subsystems", 00:07:47.214 "nvmf_set_crdt", 00:07:47.214 "nvmf_set_config", 00:07:47.214 "nvmf_set_max_subsystems", 00:07:47.214 "iscsi_set_options", 00:07:47.214 "iscsi_get_auth_groups", 00:07:47.214 "iscsi_auth_group_remove_secret", 00:07:47.214 "iscsi_auth_group_add_secret", 00:07:47.214 "iscsi_delete_auth_group", 00:07:47.214 "iscsi_create_auth_group", 00:07:47.214 "iscsi_set_discovery_auth", 00:07:47.214 "iscsi_get_options", 00:07:47.214 "iscsi_target_node_request_logout", 00:07:47.214 "iscsi_target_node_set_redirect", 00:07:47.214 "iscsi_target_node_set_auth", 00:07:47.214 "iscsi_target_node_add_lun", 00:07:47.214 "iscsi_get_connections", 00:07:47.214 "iscsi_portal_group_set_auth", 00:07:47.214 "iscsi_start_portal_group", 00:07:47.214 "iscsi_delete_portal_group", 00:07:47.214 "iscsi_create_portal_group", 00:07:47.214 "iscsi_get_portal_groups", 00:07:47.214 "iscsi_delete_target_node", 00:07:47.214 "iscsi_target_node_remove_pg_ig_maps", 00:07:47.214 "iscsi_target_node_add_pg_ig_maps", 00:07:47.214 "iscsi_create_target_node", 00:07:47.214 "iscsi_get_target_nodes", 00:07:47.214 "iscsi_delete_initiator_group", 00:07:47.214 "iscsi_initiator_group_remove_initiators", 00:07:47.214 "iscsi_initiator_group_add_initiators", 00:07:47.214 "iscsi_create_initiator_group", 00:07:47.214 "iscsi_get_initiator_groups", 00:07:47.214 "iaa_scan_accel_module", 00:07:47.214 "dsa_scan_accel_module", 00:07:47.214 "ioat_scan_accel_module", 00:07:47.214 "accel_error_inject_error", 00:07:47.214 "bdev_iscsi_delete", 00:07:47.214 "bdev_iscsi_create", 00:07:47.214 "bdev_iscsi_set_options", 00:07:47.214 "bdev_virtio_attach_controller", 00:07:47.214 "bdev_virtio_scsi_get_devices", 00:07:47.214 "bdev_virtio_detach_controller", 00:07:47.214 "bdev_virtio_blk_set_hotplug", 00:07:47.214 "bdev_ftl_set_property", 00:07:47.214 "bdev_ftl_get_properties", 00:07:47.214 "bdev_ftl_get_stats", 00:07:47.214 "bdev_ftl_unmap", 00:07:47.214 "bdev_ftl_unload", 00:07:47.214 "bdev_ftl_delete", 00:07:47.214 "bdev_ftl_load", 00:07:47.214 "bdev_ftl_create", 00:07:47.214 "bdev_aio_delete", 00:07:47.214 "bdev_aio_rescan", 00:07:47.214 "bdev_aio_create", 
00:07:47.214 "blobfs_create", 00:07:47.214 "blobfs_detect", 00:07:47.214 "blobfs_set_cache_size", 00:07:47.214 "bdev_zone_block_delete", 00:07:47.214 "bdev_zone_block_create", 00:07:47.214 "bdev_delay_delete", 00:07:47.214 "bdev_delay_create", 00:07:47.214 "bdev_delay_update_latency", 00:07:47.214 "bdev_split_delete", 00:07:47.214 "bdev_split_create", 00:07:47.214 "bdev_error_inject_error", 00:07:47.214 "bdev_error_delete", 00:07:47.214 "bdev_error_create", 00:07:47.214 "bdev_raid_set_options", 00:07:47.214 "bdev_raid_remove_base_bdev", 00:07:47.214 "bdev_raid_add_base_bdev", 00:07:47.214 "bdev_raid_delete", 00:07:47.214 "bdev_raid_create", 00:07:47.214 "bdev_raid_get_bdevs", 00:07:47.214 "bdev_lvol_grow_lvstore", 00:07:47.214 "bdev_lvol_get_lvols", 00:07:47.214 "bdev_lvol_get_lvstores", 00:07:47.214 "bdev_lvol_delete", 00:07:47.214 "bdev_lvol_set_read_only", 00:07:47.214 "bdev_lvol_resize", 00:07:47.214 "bdev_lvol_decouple_parent", 00:07:47.214 "bdev_lvol_inflate", 00:07:47.214 "bdev_lvol_rename", 00:07:47.214 "bdev_lvol_clone_bdev", 00:07:47.214 "bdev_lvol_clone", 00:07:47.214 "bdev_lvol_snapshot", 00:07:47.214 "bdev_lvol_create", 00:07:47.214 "bdev_lvol_delete_lvstore", 00:07:47.214 "bdev_lvol_rename_lvstore", 00:07:47.214 "bdev_lvol_create_lvstore", 00:07:47.214 "bdev_passthru_delete", 00:07:47.214 "bdev_passthru_create", 00:07:47.214 "bdev_nvme_cuse_unregister", 00:07:47.214 "bdev_nvme_cuse_register", 00:07:47.214 "bdev_opal_new_user", 00:07:47.214 "bdev_opal_set_lock_state", 00:07:47.214 "bdev_opal_delete", 00:07:47.214 "bdev_opal_get_info", 00:07:47.214 "bdev_opal_create", 00:07:47.214 "bdev_nvme_opal_revert", 00:07:47.214 "bdev_nvme_opal_init", 00:07:47.214 "bdev_nvme_send_cmd", 00:07:47.214 "bdev_nvme_get_path_iostat", 00:07:47.214 "bdev_nvme_get_mdns_discovery_info", 00:07:47.214 "bdev_nvme_stop_mdns_discovery", 00:07:47.214 "bdev_nvme_start_mdns_discovery", 00:07:47.214 "bdev_nvme_set_multipath_policy", 00:07:47.214 "bdev_nvme_set_preferred_path", 00:07:47.214 "bdev_nvme_get_io_paths", 00:07:47.214 "bdev_nvme_remove_error_injection", 00:07:47.214 "bdev_nvme_add_error_injection", 00:07:47.214 "bdev_nvme_get_discovery_info", 00:07:47.214 "bdev_nvme_stop_discovery", 00:07:47.214 "bdev_nvme_start_discovery", 00:07:47.214 "bdev_nvme_get_controller_health_info", 00:07:47.214 "bdev_nvme_disable_controller", 00:07:47.214 "bdev_nvme_enable_controller", 00:07:47.214 "bdev_nvme_reset_controller", 00:07:47.214 "bdev_nvme_get_transport_statistics", 00:07:47.214 "bdev_nvme_apply_firmware", 00:07:47.214 "bdev_nvme_detach_controller", 00:07:47.214 "bdev_nvme_get_controllers", 00:07:47.214 "bdev_nvme_attach_controller", 00:07:47.214 "bdev_nvme_set_hotplug", 00:07:47.214 "bdev_nvme_set_options", 00:07:47.214 "bdev_null_resize", 00:07:47.214 "bdev_null_delete", 00:07:47.214 "bdev_null_create", 00:07:47.214 "bdev_malloc_delete", 00:07:47.214 "bdev_malloc_create" 00:07:47.214 ] 00:07:47.214 20:50:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:47.214 20:50:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:47.214 20:50:15 -- common/autotest_common.sh@10 -- # set +x 00:07:47.214 20:50:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:47.214 20:50:15 -- spdkcli/tcp.sh@38 -- # killprocess 103359 00:07:47.214 20:50:15 -- common/autotest_common.sh@926 -- # '[' -z 103359 ']' 00:07:47.214 20:50:15 -- common/autotest_common.sh@930 -- # kill -0 103359 00:07:47.214 20:50:15 -- common/autotest_common.sh@931 -- # uname 00:07:47.214 20:50:15 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:07:47.214 20:50:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103359 00:07:47.214 20:50:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:47.214 20:50:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:47.214 killing process with pid 103359 00:07:47.214 20:50:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103359' 00:07:47.214 20:50:15 -- common/autotest_common.sh@945 -- # kill 103359 00:07:47.214 20:50:15 -- common/autotest_common.sh@950 -- # wait 103359 00:07:49.116 ************************************ 00:07:49.116 END TEST spdkcli_tcp 00:07:49.116 ************************************ 00:07:49.116 00:07:49.116 real 0m3.957s 00:07:49.116 user 0m7.342s 00:07:49.116 sys 0m0.536s 00:07:49.116 20:50:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.116 20:50:17 -- common/autotest_common.sh@10 -- # set +x 00:07:49.116 20:50:17 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:49.116 20:50:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.116 20:50:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.116 20:50:17 -- common/autotest_common.sh@10 -- # set +x 00:07:49.116 ************************************ 00:07:49.116 START TEST dpdk_mem_utility 00:07:49.116 ************************************ 00:07:49.116 20:50:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:49.116 * Looking for test storage... 00:07:49.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:49.116 20:50:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:49.116 20:50:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=103485 00:07:49.116 20:50:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:49.116 20:50:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 103485 00:07:49.116 20:50:17 -- common/autotest_common.sh@819 -- # '[' -z 103485 ']' 00:07:49.116 20:50:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.116 20:50:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.116 20:50:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.116 20:50:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.116 20:50:17 -- common/autotest_common.sh@10 -- # set +x 00:07:49.372 [2024-06-09 20:50:17.310095] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
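For reference, the TCP leg of the spdkcli_tcp test that just ended is nothing more than a socat bridge from TCP port 9998 to the target's UNIX socket, with rpc.py pointed at the TCP side; both commands below are taken verbatim from the traces above:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods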
00:07:49.373 [2024-06-09 20:50:17.310519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103485 ] 00:07:49.373 [2024-06-09 20:50:17.484968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.629 [2024-06-09 20:50:17.758810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.629 [2024-06-09 20:50:17.759054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.004 20:50:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.004 20:50:18 -- common/autotest_common.sh@852 -- # return 0 00:07:51.004 20:50:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:51.004 20:50:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:51.004 20:50:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.004 20:50:18 -- common/autotest_common.sh@10 -- # set +x 00:07:51.004 { 00:07:51.004 "filename": "/tmp/spdk_mem_dump.txt" 00:07:51.004 } 00:07:51.004 20:50:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.004 20:50:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:51.004 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:51.004 1 heaps totaling size 820.000000 MiB 00:07:51.004 size: 820.000000 MiB heap id: 0 00:07:51.004 end heaps---------- 00:07:51.004 8 mempools totaling size 598.116089 MiB 00:07:51.004 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:51.004 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:51.004 size: 84.521057 MiB name: bdev_io_103485 00:07:51.004 size: 51.011292 MiB name: evtpool_103485 00:07:51.004 size: 50.003479 MiB name: msgpool_103485 00:07:51.004 size: 21.763794 MiB name: PDU_Pool 00:07:51.004 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:51.004 size: 0.026123 MiB name: Session_Pool 00:07:51.004 end mempools------- 00:07:51.004 6 memzones totaling size 4.142822 MiB 00:07:51.004 size: 1.000366 MiB name: RG_ring_0_103485 00:07:51.004 size: 1.000366 MiB name: RG_ring_1_103485 00:07:51.004 size: 1.000366 MiB name: RG_ring_4_103485 00:07:51.004 size: 1.000366 MiB name: RG_ring_5_103485 00:07:51.004 size: 0.125366 MiB name: RG_ring_2_103485 00:07:51.004 size: 0.015991 MiB name: RG_ring_3_103485 00:07:51.004 end memzones------- 00:07:51.004 20:50:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:51.004 heap id: 0 total size: 820.000000 MiB number of busy elements: 220 number of free elements: 18 00:07:51.004 list of free elements. 
size: 18.471191 MiB 00:07:51.004 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:51.004 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:51.004 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:51.004 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:51.004 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:51.004 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:51.004 element at address: 0x200019600000 with size: 0.999329 MiB 00:07:51.004 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:51.004 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:51.004 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:51.004 element at address: 0x200019900040 with size: 0.937256 MiB 00:07:51.004 element at address: 0x200000200000 with size: 0.835083 MiB 00:07:51.004 element at address: 0x20001b000000 with size: 0.562927 MiB 00:07:51.004 element at address: 0x200019200000 with size: 0.489197 MiB 00:07:51.004 element at address: 0x200019a00000 with size: 0.485413 MiB 00:07:51.004 element at address: 0x200013800000 with size: 0.468140 MiB 00:07:51.004 element at address: 0x200028400000 with size: 0.399475 MiB 00:07:51.004 element at address: 0x200003a00000 with size: 0.356140 MiB 00:07:51.004 list of standard malloc elements. size: 199.264404 MiB 00:07:51.004 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:51.004 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:51.004 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:51.004 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:51.004 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:51.004 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:51.004 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:51.004 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:51.004 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:07:51.004 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:07:51.004 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:51.004 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:07:51.004 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:51.004 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:51.004 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:51.004 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:51.004 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:51.005 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:51.005 element at address: 0x200019abc680 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0926c0 
with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:07:51.005 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:07:51.006 element at address: 0x200028466440 with size: 0.000244 MiB 00:07:51.006 element at address: 0x200028466540 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d200 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d480 with size: 0.000244 MiB 
00:07:51.006 element at address: 0x20002846d580 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d680 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d780 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d880 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846d980 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846da80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846db80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846de80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846df80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e080 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e180 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e280 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e380 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e480 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e580 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e680 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e780 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e880 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846e980 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f080 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f180 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f280 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f380 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f480 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f580 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f680 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f780 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f880 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846f980 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:07:51.006 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:07:51.006 list of memzone associated elements. 
size: 602.264404 MiB 00:07:51.006 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:07:51.006 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:51.006 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:07:51.006 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:51.006 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:07:51.006 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_103485_0 00:07:51.006 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:51.006 associated memzone info: size: 48.002930 MiB name: MP_evtpool_103485_0 00:07:51.006 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:51.006 associated memzone info: size: 48.002930 MiB name: MP_msgpool_103485_0 00:07:51.006 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:07:51.006 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:51.006 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:07:51.006 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:51.006 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:51.006 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_103485 00:07:51.006 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:51.006 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_103485 00:07:51.006 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:51.006 associated memzone info: size: 1.007996 MiB name: MP_evtpool_103485 00:07:51.006 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:51.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:51.006 element at address: 0x200019abc780 with size: 1.008179 MiB 00:07:51.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:51.006 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:51.006 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:51.006 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:07:51.006 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:51.006 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:51.006 associated memzone info: size: 1.000366 MiB name: RG_ring_0_103485 00:07:51.006 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:51.006 associated memzone info: size: 1.000366 MiB name: RG_ring_1_103485 00:07:51.006 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:07:51.006 associated memzone info: size: 1.000366 MiB name: RG_ring_4_103485 00:07:51.006 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:07:51.006 associated memzone info: size: 1.000366 MiB name: RG_ring_5_103485 00:07:51.006 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:51.006 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_103485 00:07:51.006 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:07:51.006 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:51.006 element at address: 0x200013878680 with size: 0.500549 MiB 00:07:51.006 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:51.006 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:07:51.006 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:51.006 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:51.006 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_103485 00:07:51.006 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:07:51.006 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:51.006 element at address: 0x200028466640 with size: 0.023804 MiB 00:07:51.006 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:51.006 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:51.006 associated memzone info: size: 0.015991 MiB name: RG_ring_3_103485 00:07:51.006 element at address: 0x20002846c7c0 with size: 0.002502 MiB 00:07:51.006 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:51.006 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:07:51.006 associated memzone info: size: 0.000183 MiB name: MP_msgpool_103485 00:07:51.006 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:51.006 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_103485 00:07:51.006 element at address: 0x20002846d300 with size: 0.000366 MiB 00:07:51.006 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:51.006 20:50:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:51.006 20:50:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 103485 00:07:51.006 20:50:19 -- common/autotest_common.sh@926 -- # '[' -z 103485 ']' 00:07:51.006 20:50:19 -- common/autotest_common.sh@930 -- # kill -0 103485 00:07:51.006 20:50:19 -- common/autotest_common.sh@931 -- # uname 00:07:51.006 20:50:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:51.006 20:50:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103485 00:07:51.006 20:50:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:51.006 20:50:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:51.006 killing process with pid 103485 00:07:51.006 20:50:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103485' 00:07:51.006 20:50:19 -- common/autotest_common.sh@945 -- # kill 103485 00:07:51.006 20:50:19 -- common/autotest_common.sh@950 -- # wait 103485 00:07:52.958 00:07:52.958 real 0m3.732s 00:07:52.958 user 0m3.904s 00:07:52.958 sys 0m0.532s 00:07:52.958 20:50:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.958 20:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.958 ************************************ 00:07:52.958 END TEST dpdk_mem_utility 00:07:52.958 ************************************ 00:07:52.958 20:50:20 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:52.958 20:50:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:52.958 20:50:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.958 20:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.958 ************************************ 00:07:52.958 START TEST event 00:07:52.958 ************************************ 00:07:52.958 20:50:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:52.958 * Looking for test storage... 
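Condensed, the dpdk_mem_utility test that just finished is four commands; everything between them in the trace is harness bookkeeping. The paths and flags below are the ones the log records (rpc_cmd resolves to scripts/rpc.py); backgrounding with & and dropping the waitforlisten/trap steps are simplifications.

SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt &                    # start the target app
spdk_pid=$!
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt
$SPDK/scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
$SPDK/scripts/dpdk_mem_info.py -m 0           # per-element dump of heap 0 (the long listing above)
kill "$spdk_pid"

The flagless invocation prints the summary (one 820 MiB heap, 8 mempools, 6 memzones); -m 0 expands heap 0 into the element-by-element listing above.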
00:07:52.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:52.958 20:50:21 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:52.958 20:50:21 -- bdev/nbd_common.sh@6 -- # set -e 00:07:52.958 20:50:21 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:52.958 20:50:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:52.958 20:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.958 20:50:21 -- common/autotest_common.sh@10 -- # set +x 00:07:52.958 ************************************ 00:07:52.958 START TEST event_perf 00:07:52.958 ************************************ 00:07:52.958 20:50:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:52.958 Running I/O for 1 seconds...[2024-06-09 20:50:21.071774] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:52.958 [2024-06-09 20:50:21.072337] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103601 ] 00:07:53.216 [2024-06-09 20:50:21.245714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.475 [2024-06-09 20:50:21.431811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.475 [2024-06-09 20:50:21.431966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.475 [2024-06-09 20:50:21.432090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.475 [2024-06-09 20:50:21.432096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.851 Running I/O for 1 seconds... 00:07:54.851 lcore 0: 217798 00:07:54.851 lcore 1: 217797 00:07:54.851 lcore 2: 217796 00:07:54.851 lcore 3: 217795 00:07:54.851 done. 00:07:54.851 00:07:54.851 real 0m1.719s 00:07:54.851 user 0m4.478s 00:07:54.851 sys 0m0.128s 00:07:54.851 20:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.851 20:50:22 -- common/autotest_common.sh@10 -- # set +x 00:07:54.851 ************************************ 00:07:54.851 END TEST event_perf 00:07:54.851 ************************************ 00:07:54.851 20:50:22 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:54.851 20:50:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.851 20:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.851 20:50:22 -- common/autotest_common.sh@10 -- # set +x 00:07:54.851 ************************************ 00:07:54.851 START TEST event_reactor 00:07:54.851 ************************************ 00:07:54.851 20:50:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:54.851 [2024-06-09 20:50:22.848409] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
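The four lcore counters printed by event_perf above are per-reactor event counts for the one-second run. The invocation is as recorded in the log; the arithmetic line is only an illustrative sanity check of those printed numbers, not part of the test.

/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1  # -m 0xF: reactors on lcores 0-3; -t 1: run 1 s
echo $((217798 + 217797 + 217796 + 217795))   # ~871k events total across the 4 reactors in 1 s

The near-identical per-core counts suggest the event framework spread work evenly across the mask.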
00:07:54.852 [2024-06-09 20:50:22.848617] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103649 ] 00:07:54.852 [2024-06-09 20:50:23.017019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.111 [2024-06-09 20:50:23.224377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.486 test_start 00:07:56.486 oneshot 00:07:56.486 tick 100 00:07:56.486 tick 100 00:07:56.486 tick 250 00:07:56.486 tick 100 00:07:56.486 tick 100 00:07:56.486 tick 100 00:07:56.486 tick 250 00:07:56.486 tick 500 00:07:56.486 tick 100 00:07:56.486 tick 100 00:07:56.486 tick 250 00:07:56.486 tick 100 00:07:56.486 tick 100 00:07:56.486 test_end 00:07:56.486 00:07:56.486 real 0m1.748s 00:07:56.486 user 0m1.560s 00:07:56.486 sys 0m0.088s 00:07:56.486 20:50:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.486 20:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:56.486 ************************************ 00:07:56.486 END TEST event_reactor 00:07:56.486 ************************************ 00:07:56.486 20:50:24 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.486 20:50:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:56.486 20:50:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.486 20:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:56.486 ************************************ 00:07:56.486 START TEST event_reactor_perf 00:07:56.486 ************************************ 00:07:56.486 20:50:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.486 [2024-06-09 20:50:24.639705] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
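One plausible reading of the event_reactor output above, inferred from the printout rather than the test source: test_start/test_end bracket the run, "oneshot" is a single scheduled event, and each "tick N" line is a repeating timer firing, with N the argument it was registered with (100, 250, 500). Invocation as recorded:

/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1   # single reactor (-c 0x1 per the EAL line), 1 s run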
00:07:56.486 [2024-06-09 20:50:24.639893] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103692 ] 00:07:56.744 [2024-06-09 20:50:24.794335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.002 [2024-06-09 20:50:24.959787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.379 test_start 00:07:58.379 test_end 00:07:58.379 Performance: 355264 events per second 00:07:58.379 00:07:58.379 real 0m1.760s 00:07:58.379 user 0m1.548s 00:07:58.379 sys 0m0.112s 00:07:58.379 20:50:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.379 20:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:58.379 ************************************ 00:07:58.379 END TEST event_reactor_perf 00:07:58.379 ************************************ 00:07:58.379 20:50:26 -- event/event.sh@49 -- # uname -s 00:07:58.379 20:50:26 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:58.379 20:50:26 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:58.379 20:50:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.379 20:50:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.379 20:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:58.379 ************************************ 00:07:58.379 START TEST event_scheduler 00:07:58.379 ************************************ 00:07:58.379 20:50:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:58.379 * Looking for test storage... 00:07:58.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:58.379 20:50:26 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:58.379 20:50:26 -- scheduler/scheduler.sh@35 -- # scheduler_pid=103768 00:07:58.379 20:50:26 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:58.379 20:50:26 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.379 20:50:26 -- scheduler/scheduler.sh@37 -- # waitforlisten 103768 00:07:58.379 20:50:26 -- common/autotest_common.sh@819 -- # '[' -z 103768 ']' 00:07:58.379 20:50:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.379 20:50:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.379 20:50:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.379 20:50:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.379 20:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:58.638 [2024-06-09 20:50:26.593791] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
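reactor_perf measures how many scheduled events a single reactor can complete per second; the 355264 figure above is that throughput on this VM. Invocation as recorded in the log:

/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1   # 1 s measurement window on one core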
00:07:58.638 [2024-06-09 20:50:26.594040] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103768 ] 00:07:58.638 [2024-06-09 20:50:26.790000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.895 [2024-06-09 20:50:27.045609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.895 [2024-06-09 20:50:27.045721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.895 [2024-06-09 20:50:27.045793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.895 [2024-06-09 20:50:27.045802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.460 20:50:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.460 20:50:27 -- common/autotest_common.sh@852 -- # return 0 00:07:59.460 20:50:27 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:59.460 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.460 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.460 POWER: Env isn't set yet! 00:07:59.460 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:59.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:59.460 POWER: Cannot set governor of lcore 0 to userspace 00:07:59.460 POWER: Attempting to initialise PSTAT power management... 00:07:59.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:59.460 POWER: Cannot set governor of lcore 0 to performance 00:07:59.460 POWER: Attempting to initialise AMD PSTATE power management... 00:07:59.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:59.460 POWER: Cannot set governor of lcore 0 to userspace 00:07:59.460 POWER: Attempting to initialise CPPC power management... 00:07:59.460 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:59.460 POWER: Cannot set governor of lcore 0 to userspace 00:07:59.460 POWER: Attempting to initialise VM power management... 00:07:59.460 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:59.460 POWER: Unable to set Power Management Environment for lcore 0 00:07:59.460 [2024-06-09 20:50:27.567994] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:59.460 [2024-06-09 20:50:27.568286] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:59.460 [2024-06-09 20:50:27.568514] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:59.460 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.460 20:50:27 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:59.460 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.460 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.718 [2024-06-09 20:50:27.867343] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
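The POWER error cascade above is the dynamic scheduler probing each supported backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, then the VM power agent) and finding none, so dpdk_governor init fails and the test proceeds anyway after framework_start_init. A quick illustrative check of why this guest has nothing to offer; the loop is an addition, but both paths are taken verbatim from the errors above:

for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
         /dev/virtio-ports/virtio.serial.port.poweragent.0; do
  [ -e "$f" ] && echo "present: $f" || echo "missing: $f"   # both are missing on this VM
done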
00:07:59.718 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.718 20:50:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:59.718 20:50:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.718 20:50:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.718 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.718 ************************************ 00:07:59.718 START TEST scheduler_create_thread 00:07:59.718 ************************************ 00:07:59.718 20:50:27 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:07:59.718 20:50:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:59.718 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.718 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.718 2 00:07:59.718 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 3 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 4 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 5 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 6 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 7 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 8 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 9 00:07:59.976 
20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 10 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.976 20:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.976 20:50:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:59.976 20:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.976 20:50:27 -- common/autotest_common.sh@10 -- # set +x 00:08:00.910 20:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.910 20:50:28 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:00.910 20:50:28 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:00.910 20:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.910 20:50:28 -- common/autotest_common.sh@10 -- # set +x 00:08:01.892 ************************************ 00:08:01.892 END TEST scheduler_create_thread 00:08:01.892 ************************************ 00:08:01.892 20:50:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:01.892 00:08:01.892 real 0m2.145s 00:08:01.892 user 0m0.016s 00:08:01.892 sys 0m0.004s 00:08:01.892 20:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.892 20:50:30 -- common/autotest_common.sh@10 -- # set +x 00:08:02.151 20:50:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:02.151 20:50:30 -- scheduler/scheduler.sh@46 -- # killprocess 103768 00:08:02.151 20:50:30 -- common/autotest_common.sh@926 -- # '[' -z 103768 ']' 00:08:02.151 20:50:30 -- common/autotest_common.sh@930 -- # kill -0 103768 00:08:02.151 20:50:30 -- common/autotest_common.sh@931 -- # uname 00:08:02.151 20:50:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:02.151 20:50:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103768 00:08:02.151 20:50:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:02.151 20:50:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:02.151 20:50:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103768' 00:08:02.151 killing process with pid 103768 00:08:02.151 20:50:30 -- common/autotest_common.sh@945 -- # kill 103768 00:08:02.151 20:50:30 -- common/autotest_common.sh@950 -- # wait 103768 00:08:02.408 [2024-06-09 20:50:30.505360] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
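The scheduler_create_thread subtest above walks threads through their whole lifecycle over RPC. Stripped of the rpc_cmd wrapper, the calls are the ones below; the RPC names and flags are verbatim from the trace, while the RPC shell variable, the captured thread id, and the assumption that rpc.py can locate the test's scheduler_plugin on its Python path are illustrative.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"  # plugin ships with the test app
tid=$($RPC scheduler_thread_create -n half_active -a 0)   # create a thread; returns its id (11 above)
$RPC scheduler_thread_set_active "$tid" 50                # raise it to 50% active
$RPC scheduler_thread_delete "$tid"                       # in the trace, delete was applied to thread 12 ('deleted')

Threads created with -m pin to a core mask (the active_pinned/idle_pinned batch above), while -a sets the busy percentage the thread reports to the scheduler.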
00:08:03.782 ************************************ 00:08:03.782 END TEST event_scheduler 00:08:03.782 ************************************ 00:08:03.782 00:08:03.782 real 0m5.256s 00:08:03.782 user 0m8.623s 00:08:03.782 sys 0m0.486s 00:08:03.782 20:50:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.782 20:50:31 -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 20:50:31 -- event/event.sh@51 -- # modprobe -n nbd 00:08:03.782 20:50:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:03.782 20:50:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.782 20:50:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.782 20:50:31 -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 ************************************ 00:08:03.782 START TEST app_repeat 00:08:03.782 ************************************ 00:08:03.782 20:50:31 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:08:03.782 20:50:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.782 20:50:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.782 20:50:31 -- event/event.sh@13 -- # local nbd_list 00:08:03.782 20:50:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.782 20:50:31 -- event/event.sh@14 -- # local bdev_list 00:08:03.782 20:50:31 -- event/event.sh@15 -- # local repeat_times=4 00:08:03.782 20:50:31 -- event/event.sh@17 -- # modprobe nbd 00:08:03.782 20:50:31 -- event/event.sh@19 -- # repeat_pid=103898 00:08:03.782 20:50:31 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:03.782 20:50:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.782 Process app_repeat pid: 103898 00:08:03.782 20:50:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 103898' 00:08:03.782 20:50:31 -- event/event.sh@23 -- # for i in {0..2} 00:08:03.782 spdk_app_start Round 0 00:08:03.782 20:50:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:03.782 20:50:31 -- event/event.sh@25 -- # waitforlisten 103898 /var/tmp/spdk-nbd.sock 00:08:03.782 20:50:31 -- common/autotest_common.sh@819 -- # '[' -z 103898 ']' 00:08:03.782 20:50:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:03.782 20:50:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:03.782 20:50:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:03.782 20:50:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.782 20:50:31 -- common/autotest_common.sh@10 -- # set +x 00:08:03.782 [2024-06-09 20:50:31.793863] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
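app_repeat, starting here, exercises restart robustness: the harness repeatedly sends spdk_kill_instance SIGTERM and waits for the app to come back ("spdk_app_start Round N" below), re-running the Malloc/nbd data check each round. Invocation as recorded above; the flag gloss is a reading of the trace (-r the RPC socket path, -m 0x3 reactors on cores 0-1, -t 4 matching the harness's repeat_times=4):

/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4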
00:08:03.782 [2024-06-09 20:50:31.794078] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103898 ] 00:08:04.055 [2024-06-09 20:50:31.977236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.055 [2024-06-09 20:50:32.226690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.056 [2024-06-09 20:50:32.226695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.642 20:50:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.642 20:50:32 -- common/autotest_common.sh@852 -- # return 0 00:08:04.642 20:50:32 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.209 Malloc0 00:08:05.209 20:50:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.468 Malloc1 00:08:05.468 20:50:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@12 -- # local i 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:05.468 20:50:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:05.725 /dev/nbd0 00:08:05.726 20:50:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:05.726 20:50:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:05.726 20:50:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:05.726 20:50:33 -- common/autotest_common.sh@857 -- # local i 00:08:05.726 20:50:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:05.726 20:50:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:05.726 20:50:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:05.726 20:50:33 -- common/autotest_common.sh@861 -- # break 00:08:05.726 20:50:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:05.726 20:50:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:05.726 20:50:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:05.726 1+0 records in 00:08:05.726 1+0 records out 00:08:05.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383392 s, 10.7 MB/s 00:08:05.726 20:50:33 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:05.726 20:50:33 -- common/autotest_common.sh@874 -- # size=4096 00:08:05.726 20:50:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:05.726 20:50:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:05.726 20:50:33 -- common/autotest_common.sh@877 -- # return 0 00:08:05.726 20:50:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.726 20:50:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:05.726 20:50:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:05.984 /dev/nbd1 00:08:05.984 20:50:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:05.984 20:50:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:05.984 20:50:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:05.984 20:50:34 -- common/autotest_common.sh@857 -- # local i 00:08:05.984 20:50:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:05.984 20:50:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:05.984 20:50:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:05.984 20:50:34 -- common/autotest_common.sh@861 -- # break 00:08:05.984 20:50:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:05.984 20:50:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:05.984 20:50:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:05.984 1+0 records in 00:08:05.984 1+0 records out 00:08:05.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643371 s, 6.4 MB/s 00:08:05.984 20:50:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:05.984 20:50:34 -- common/autotest_common.sh@874 -- # size=4096 00:08:05.984 20:50:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.241 20:50:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:06.241 20:50:34 -- common/autotest_common.sh@877 -- # return 0 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.241 20:50:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.241 { 00:08:06.241 "nbd_device": "/dev/nbd0", 00:08:06.241 "bdev_name": "Malloc0" 00:08:06.241 }, 00:08:06.241 { 00:08:06.241 "nbd_device": "/dev/nbd1", 00:08:06.241 "bdev_name": "Malloc1" 00:08:06.241 } 00:08:06.241 ]' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.500 { 00:08:06.500 "nbd_device": "/dev/nbd0", 00:08:06.500 "bdev_name": "Malloc0" 00:08:06.500 }, 00:08:06.500 { 00:08:06.500 "nbd_device": "/dev/nbd1", 00:08:06.500 "bdev_name": "Malloc1" 00:08:06.500 } 00:08:06.500 ]' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.500 /dev/nbd1' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:08:06.500 /dev/nbd1' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@65 -- # count=2 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@95 -- # count=2 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:06.500 256+0 records in 00:08:06.500 256+0 records out 00:08:06.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106895 s, 98.1 MB/s 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.500 256+0 records in 00:08:06.500 256+0 records out 00:08:06.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246988 s, 42.5 MB/s 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.500 256+0 records in 00:08:06.500 256+0 records out 00:08:06.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264797 s, 39.6 MB/s 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@51 -- # local i 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.500 20:50:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@41 -- # break 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.759 20:50:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@41 -- # break 00:08:07.017 20:50:35 -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.018 20:50:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.018 20:50:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.018 20:50:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.275 20:50:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@65 -- # true 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@65 -- # count=0 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@104 -- # count=0 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:07.276 20:50:35 -- bdev/nbd_common.sh@109 -- # return 0 00:08:07.276 20:50:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:07.842 20:50:35 -- event/event.sh@35 -- # sleep 3 00:08:08.776 [2024-06-09 20:50:36.768237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.776 [2024-06-09 20:50:36.922266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.776 [2024-06-09 20:50:36.922280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.033 [2024-06-09 20:50:37.095164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:09.033 [2024-06-09 20:50:37.095312] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
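
The records above show the suite's waitfornbd readiness probe for each freshly attached device: poll /proc/partitions until the kernel publishes the nbd name, then do one 4 KiB O_DIRECT read and size-check the result to prove the device really services I/O. A minimal standalone sketch of that pattern (the 20-iteration budget, the grep, the dd, and the stat size check follow the trace; the per-retry sleep is an assumption, since xtrace never shows the delay):

    # Sketch of the readiness probe traced above (sleep interval assumed).
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: brief back-off between polls
        done
        # One 4 KiB O_DIRECT read proves the device answers real I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
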
00:08:10.935 20:50:38 -- event/event.sh@23 -- # for i in {0..2} 00:08:10.935 spdk_app_start Round 1 00:08:10.935 20:50:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:10.935 20:50:38 -- event/event.sh@25 -- # waitforlisten 103898 /var/tmp/spdk-nbd.sock 00:08:10.935 20:50:38 -- common/autotest_common.sh@819 -- # '[' -z 103898 ']' 00:08:10.935 20:50:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:10.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:10.935 20:50:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.935 20:50:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:10.935 20:50:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.935 20:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:10.935 20:50:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:10.935 20:50:39 -- common/autotest_common.sh@852 -- # return 0 00:08:10.935 20:50:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:11.193 Malloc0 00:08:11.193 20:50:39 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:11.451 Malloc1 00:08:11.451 20:50:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@12 -- # local i 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.451 20:50:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:11.709 /dev/nbd0 00:08:11.709 20:50:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:11.709 20:50:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:11.709 20:50:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:11.709 20:50:39 -- common/autotest_common.sh@857 -- # local i 00:08:11.709 20:50:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:11.709 20:50:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:11.709 20:50:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:11.709 20:50:39 -- common/autotest_common.sh@861 -- # break 00:08:11.709 20:50:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:11.709 20:50:39 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:08:11.709 20:50:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.709 1+0 records in 00:08:11.709 1+0 records out 00:08:11.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238027 s, 17.2 MB/s 00:08:11.709 20:50:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.709 20:50:39 -- common/autotest_common.sh@874 -- # size=4096 00:08:11.709 20:50:39 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.709 20:50:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:11.709 20:50:39 -- common/autotest_common.sh@877 -- # return 0 00:08:11.709 20:50:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.709 20:50:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.709 20:50:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:11.966 /dev/nbd1 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:11.966 20:50:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:11.966 20:50:40 -- common/autotest_common.sh@857 -- # local i 00:08:11.966 20:50:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:11.966 20:50:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:11.966 20:50:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:11.966 20:50:40 -- common/autotest_common.sh@861 -- # break 00:08:11.966 20:50:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:11.966 20:50:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:11.966 20:50:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.966 1+0 records in 00:08:11.966 1+0 records out 00:08:11.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276955 s, 14.8 MB/s 00:08:11.966 20:50:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.966 20:50:40 -- common/autotest_common.sh@874 -- # size=4096 00:08:11.966 20:50:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.966 20:50:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:11.966 20:50:40 -- common/autotest_common.sh@877 -- # return 0 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.966 20:50:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.225 20:50:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:12.225 { 00:08:12.225 "nbd_device": "/dev/nbd0", 00:08:12.225 "bdev_name": "Malloc0" 00:08:12.225 }, 00:08:12.225 { 00:08:12.225 "nbd_device": "/dev/nbd1", 00:08:12.225 "bdev_name": "Malloc1" 00:08:12.225 } 00:08:12.225 ]' 00:08:12.225 20:50:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.225 20:50:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:12.225 { 00:08:12.225 "nbd_device": "/dev/nbd0", 00:08:12.225 "bdev_name": "Malloc0" 00:08:12.225 }, 00:08:12.225 { 00:08:12.225 
"nbd_device": "/dev/nbd1", 00:08:12.225 "bdev_name": "Malloc1" 00:08:12.225 } 00:08:12.225 ]' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:12.489 /dev/nbd1' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:12.489 /dev/nbd1' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@65 -- # count=2 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@95 -- # count=2 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:12.489 256+0 records in 00:08:12.489 256+0 records out 00:08:12.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763995 s, 137 MB/s 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:12.489 256+0 records in 00:08:12.489 256+0 records out 00:08:12.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222286 s, 47.2 MB/s 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:12.489 256+0 records in 00:08:12.489 256+0 records out 00:08:12.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277932 s, 37.7 MB/s 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:08:12.489 20:50:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@51 -- # local i 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.489 20:50:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@41 -- # break 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.757 20:50:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@41 -- # break 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.015 20:50:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@65 -- # true 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.273 20:50:41 -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.273 20:50:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:13.839 20:50:41 -- event/event.sh@35 -- # sleep 3 00:08:15.213 [2024-06-09 20:50:43.026350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.213 [2024-06-09 20:50:43.241046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.213 [2024-06-09 20:50:43.241057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.472 [2024-06-09 20:50:43.436067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
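
Teardown mirrors setup: nbd_stop_disk detaches each device over the RPC socket, and waitfornbd_exit then polls /proc/partitions until the name disappears, so the next round starts from a clean kernel state. Condensed into one hypothetical helper (nbd_stop_and_wait is an invented name combining the two traced steps; the retry delay is again an assumption):

    nbd_stop_and_wait() {
        local sock=$1 dev=$2 name i
        name=$(basename "$dev")
        scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0   # node is gone
            sleep 0.1   # assumption
        done
        return 1   # device node never went away
    }
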
00:08:15.472 [2024-06-09 20:50:43.436381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:16.847 spdk_app_start Round 2 00:08:16.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.847 20:50:44 -- event/event.sh@23 -- # for i in {0..2} 00:08:16.847 20:50:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:16.847 20:50:44 -- event/event.sh@25 -- # waitforlisten 103898 /var/tmp/spdk-nbd.sock 00:08:16.847 20:50:44 -- common/autotest_common.sh@819 -- # '[' -z 103898 ']' 00:08:16.847 20:50:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.847 20:50:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.847 20:50:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:16.847 20:50:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.847 20:50:44 -- common/autotest_common.sh@10 -- # set +x 00:08:17.106 20:50:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:17.106 20:50:45 -- common/autotest_common.sh@852 -- # return 0 00:08:17.106 20:50:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.365 Malloc0 00:08:17.365 20:50:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.624 Malloc1 00:08:17.624 20:50:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@12 -- # local i 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.624 20:50:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:17.882 /dev/nbd0 00:08:17.882 20:50:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:17.883 20:50:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:17.883 20:50:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:17.883 20:50:46 -- common/autotest_common.sh@857 -- # local i 00:08:17.883 20:50:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:17.883 20:50:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:17.883 20:50:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:17.883 20:50:46 -- 
common/autotest_common.sh@861 -- # break 00:08:17.883 20:50:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:17.883 20:50:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:17.883 20:50:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:17.883 1+0 records in 00:08:17.883 1+0 records out 00:08:17.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627435 s, 6.5 MB/s 00:08:17.883 20:50:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.883 20:50:46 -- common/autotest_common.sh@874 -- # size=4096 00:08:17.883 20:50:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:17.883 20:50:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:17.883 20:50:46 -- common/autotest_common.sh@877 -- # return 0 00:08:17.883 20:50:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.883 20:50:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.883 20:50:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.141 /dev/nbd1 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:18.399 20:50:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:18.399 20:50:46 -- common/autotest_common.sh@857 -- # local i 00:08:18.399 20:50:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:18.399 20:50:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:18.399 20:50:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:18.399 20:50:46 -- common/autotest_common.sh@861 -- # break 00:08:18.399 20:50:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:18.399 20:50:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:18.399 20:50:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.399 1+0 records in 00:08:18.399 1+0 records out 00:08:18.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663207 s, 6.2 MB/s 00:08:18.399 20:50:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.399 20:50:46 -- common/autotest_common.sh@874 -- # size=4096 00:08:18.399 20:50:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.399 20:50:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:18.399 20:50:46 -- common/autotest_common.sh@877 -- # return 0 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.399 20:50:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.657 20:50:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:18.657 { 00:08:18.657 "nbd_device": "/dev/nbd0", 00:08:18.657 "bdev_name": "Malloc0" 00:08:18.657 }, 00:08:18.657 { 00:08:18.658 "nbd_device": "/dev/nbd1", 00:08:18.658 "bdev_name": "Malloc1" 00:08:18.658 } 00:08:18.658 ]' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:18.658 { 00:08:18.658 "nbd_device": 
"/dev/nbd0", 00:08:18.658 "bdev_name": "Malloc0" 00:08:18.658 }, 00:08:18.658 { 00:08:18.658 "nbd_device": "/dev/nbd1", 00:08:18.658 "bdev_name": "Malloc1" 00:08:18.658 } 00:08:18.658 ]' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:18.658 /dev/nbd1' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:18.658 /dev/nbd1' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@65 -- # count=2 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@95 -- # count=2 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:18.658 256+0 records in 00:08:18.658 256+0 records out 00:08:18.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00930914 s, 113 MB/s 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:18.658 256+0 records in 00:08:18.658 256+0 records out 00:08:18.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230747 s, 45.4 MB/s 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:18.658 256+0 records in 00:08:18.658 256+0 records out 00:08:18.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290009 s, 36.2 MB/s 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 
00:08:18.658 20:50:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@51 -- # local i 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:18.658 20:50:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@41 -- # break 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:18.916 20:50:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@41 -- # break 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.175 20:50:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.432 20:50:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:19.432 20:50:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:19.432 20:50:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@65 -- # true 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@65 -- # count=0 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@104 -- # count=0 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:19.691 20:50:47 -- bdev/nbd_common.sh@109 -- # return 0 00:08:19.691 20:50:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:19.949 20:50:48 -- event/event.sh@35 -- # sleep 3 00:08:21.326 [2024-06-09 20:50:49.365594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.584 [2024-06-09 20:50:49.570544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.584 [2024-06-09 20:50:49.570552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.843 [2024-06-09 20:50:49.772080] notify.c: 45:spdk_notify_type_register: 
*NOTICE*: Notification type 'bdev_register' already registered. 00:08:21.843 [2024-06-09 20:50:49.772463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:23.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.244 20:50:51 -- event/event.sh@38 -- # waitforlisten 103898 /var/tmp/spdk-nbd.sock 00:08:23.244 20:50:51 -- common/autotest_common.sh@819 -- # '[' -z 103898 ']' 00:08:23.244 20:50:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.244 20:50:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:23.244 20:50:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.244 20:50:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:23.244 20:50:51 -- common/autotest_common.sh@10 -- # set +x 00:08:23.244 20:50:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:23.244 20:50:51 -- common/autotest_common.sh@852 -- # return 0 00:08:23.244 20:50:51 -- event/event.sh@39 -- # killprocess 103898 00:08:23.244 20:50:51 -- common/autotest_common.sh@926 -- # '[' -z 103898 ']' 00:08:23.244 20:50:51 -- common/autotest_common.sh@930 -- # kill -0 103898 00:08:23.244 20:50:51 -- common/autotest_common.sh@931 -- # uname 00:08:23.244 20:50:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.244 20:50:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103898 00:08:23.244 20:50:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.244 20:50:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.244 20:50:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103898' 00:08:23.244 killing process with pid 103898 00:08:23.244 20:50:51 -- common/autotest_common.sh@945 -- # kill 103898 00:08:23.244 20:50:51 -- common/autotest_common.sh@950 -- # wait 103898 00:08:24.180 spdk_app_start is called in Round 0. 00:08:24.180 Shutdown signal received, stop current app iteration 00:08:24.180 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:24.180 spdk_app_start is called in Round 1. 00:08:24.180 Shutdown signal received, stop current app iteration 00:08:24.180 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:24.180 spdk_app_start is called in Round 2. 00:08:24.180 Shutdown signal received, stop current app iteration 00:08:24.180 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:24.180 spdk_app_start is called in Round 3. 
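
The killprocess helper that ends the app_repeat test (traced while stopping pid 103898) refuses to signal blindly: on Linux it resolves the pid's command name with ps, bails out if the target is a sudo wrapper, and only then kills and waits so the exit status is actually reaped. A sketch of that guard (Linux branch only, which is all the trace exercises):

    killprocess() {
        local pid=$1 process_name
        [ "$(uname)" = Linux ] || return 1   # sketch covers only the traced Linux path
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the exit status is collected
    }
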
00:08:24.180 Shutdown signal received, stop current app iteration 00:08:24.180 ************************************ 00:08:24.180 END TEST app_repeat 00:08:24.180 ************************************ 00:08:24.180 20:50:52 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:24.180 20:50:52 -- event/event.sh@42 -- # return 0 00:08:24.180 00:08:24.180 real 0m20.617s 00:08:24.180 user 0m44.580s 00:08:24.180 sys 0m2.809s 00:08:24.180 20:50:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.180 20:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 20:50:52 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:24.438 20:50:52 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:24.438 20:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.438 20:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.438 20:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 ************************************ 00:08:24.438 START TEST cpu_locks 00:08:24.438 ************************************ 00:08:24.438 20:50:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:24.438 * Looking for test storage... 00:08:24.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:24.438 20:50:52 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:24.438 20:50:52 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:24.438 20:50:52 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:24.438 20:50:52 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:24.438 20:50:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:24.438 20:50:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.438 20:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 ************************************ 00:08:24.438 START TEST default_locks 00:08:24.438 ************************************ 00:08:24.438 20:50:52 -- common/autotest_common.sh@1104 -- # default_locks 00:08:24.438 20:50:52 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=104426 00:08:24.438 20:50:52 -- event/cpu_locks.sh@47 -- # waitforlisten 104426 00:08:24.438 20:50:52 -- common/autotest_common.sh@819 -- # '[' -z 104426 ']' 00:08:24.438 20:50:52 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:24.438 20:50:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.438 20:50:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:24.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.438 20:50:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.438 20:50:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:24.438 20:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.438 [2024-06-09 20:50:52.566186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
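
Every test in this suite gates on waitforlisten before issuing RPCs: it polls, up to the max_retries=100 visible in the trace, until the target process is alive and its UNIX domain socket answers. A sketch of the loop (the per-iteration probe via rpc_get_methods and the sleep interval are assumptions; the trace shows only the bookkeeping around them):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # assumption: a cheap RPC serves as the "is it listening yet?" probe
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5   # assumption
        done
        return 1
    }
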
00:08:24.438 [2024-06-09 20:50:52.566416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104426 ] 00:08:24.697 [2024-06-09 20:50:52.744074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.956 [2024-06-09 20:50:52.993211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.956 [2024-06-09 20:50:52.993548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.331 20:50:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.331 20:50:54 -- common/autotest_common.sh@852 -- # return 0 00:08:26.331 20:50:54 -- event/cpu_locks.sh@49 -- # locks_exist 104426 00:08:26.331 20:50:54 -- event/cpu_locks.sh@22 -- # lslocks -p 104426 00:08:26.331 20:50:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.589 20:50:54 -- event/cpu_locks.sh@50 -- # killprocess 104426 00:08:26.589 20:50:54 -- common/autotest_common.sh@926 -- # '[' -z 104426 ']' 00:08:26.589 20:50:54 -- common/autotest_common.sh@930 -- # kill -0 104426 00:08:26.589 20:50:54 -- common/autotest_common.sh@931 -- # uname 00:08:26.589 20:50:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:26.589 20:50:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104426 00:08:26.589 20:50:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:26.589 killing process with pid 104426 00:08:26.589 20:50:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:26.589 20:50:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104426' 00:08:26.589 20:50:54 -- common/autotest_common.sh@945 -- # kill 104426 00:08:26.589 20:50:54 -- common/autotest_common.sh@950 -- # wait 104426 00:08:29.117 20:50:56 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 104426 00:08:29.117 20:50:56 -- common/autotest_common.sh@640 -- # local es=0 00:08:29.117 20:50:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 104426 00:08:29.117 20:50:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:29.117 20:50:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:29.117 20:50:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:29.117 20:50:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:29.117 20:50:56 -- common/autotest_common.sh@643 -- # waitforlisten 104426 00:08:29.117 20:50:56 -- common/autotest_common.sh@819 -- # '[' -z 104426 ']' 00:08:29.117 20:50:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.117 20:50:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.117 20:50:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
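
The default_locks case just above ends with a negative assertion: once pid 104426 is gone, waitforlisten against it must fail, and the NOT wrapper turns that failure into a pass while still rejecting signal deaths (exit codes above 128). The inversion logic, reduced to its core (the trace's valid_exec_arg type check is omitted):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # >128 means killed by a signal: a real error
        (( es != 0 ))                    # plain failure of the wrapped command is the pass
    }
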
00:08:29.117 20:50:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.117 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:08:29.117 ERROR: process (pid: 104426) is no longer running 00:08:29.117 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (104426) - No such process 00:08:29.117 20:50:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:29.117 20:50:56 -- common/autotest_common.sh@852 -- # return 1 00:08:29.117 20:50:56 -- common/autotest_common.sh@643 -- # es=1 00:08:29.117 20:50:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:29.117 20:50:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:29.117 20:50:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:29.117 20:50:56 -- event/cpu_locks.sh@54 -- # no_locks 00:08:29.117 20:50:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:29.117 20:50:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:29.117 20:50:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:29.117 00:08:29.117 real 0m4.398s 00:08:29.117 user 0m4.596s 00:08:29.117 sys 0m0.650s 00:08:29.117 20:50:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.117 ************************************ 00:08:29.117 END TEST default_locks 00:08:29.117 ************************************ 00:08:29.117 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:08:29.117 20:50:56 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:29.117 20:50:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.117 20:50:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.117 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:08:29.117 ************************************ 00:08:29.117 START TEST default_locks_via_rpc 00:08:29.117 ************************************ 00:08:29.117 20:50:56 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:08:29.117 20:50:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=104513 00:08:29.117 20:50:56 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:29.117 20:50:56 -- event/cpu_locks.sh@63 -- # waitforlisten 104513 00:08:29.117 20:50:56 -- common/autotest_common.sh@819 -- # '[' -z 104513 ']' 00:08:29.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.117 20:50:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.117 20:50:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.117 20:50:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.117 20:50:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.117 20:50:56 -- common/autotest_common.sh@10 -- # set +x 00:08:29.117 [2024-06-09 20:50:57.013231] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
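
The core assertion of all the cpu_locks tests is locks_exist: a target started with -m 0x1 must hold an advisory file lock whose path contains spdk_cpu_lock, and lslocks can observe that from outside the process, exactly as the grep in the trace does:

    locks_exist() {
        local pid=$1
        # lslocks lists the locks held by the pid; the per-core lock file
        # shows up with "spdk_cpu_lock" in its path
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
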
00:08:29.117 [2024-06-09 20:50:57.013422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104513 ] 00:08:29.117 [2024-06-09 20:50:57.176553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.375 [2024-06-09 20:50:57.380997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:29.375 [2024-06-09 20:50:57.381314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.749 20:50:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:30.749 20:50:58 -- common/autotest_common.sh@852 -- # return 0 00:08:30.749 20:50:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:30.749 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.749 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:08:30.749 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.749 20:50:58 -- event/cpu_locks.sh@67 -- # no_locks 00:08:30.749 20:50:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:30.749 20:50:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:30.749 20:50:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:30.749 20:50:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:30.749 20:50:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.749 20:50:58 -- common/autotest_common.sh@10 -- # set +x 00:08:30.749 20:50:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.749 20:50:58 -- event/cpu_locks.sh@71 -- # locks_exist 104513 00:08:30.749 20:50:58 -- event/cpu_locks.sh@22 -- # lslocks -p 104513 00:08:30.749 20:50:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:31.007 20:50:58 -- event/cpu_locks.sh@73 -- # killprocess 104513 00:08:31.007 20:50:58 -- common/autotest_common.sh@926 -- # '[' -z 104513 ']' 00:08:31.007 20:50:58 -- common/autotest_common.sh@930 -- # kill -0 104513 00:08:31.007 20:50:58 -- common/autotest_common.sh@931 -- # uname 00:08:31.007 20:50:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:31.007 20:50:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104513 00:08:31.007 20:50:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:31.007 20:50:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:31.007 20:50:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104513' 00:08:31.007 killing process with pid 104513 00:08:31.007 20:50:58 -- common/autotest_common.sh@945 -- # kill 104513 00:08:31.007 20:50:58 -- common/autotest_common.sh@950 -- # wait 104513 00:08:32.914 00:08:32.914 real 0m4.044s 00:08:32.914 user 0m4.269s 00:08:32.914 sys 0m0.682s 00:08:32.914 20:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.914 ************************************ 00:08:32.914 END TEST default_locks_via_rpc 00:08:32.914 ************************************ 00:08:32.914 20:51:00 -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 20:51:01 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:32.914 20:51:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.914 20:51:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.914 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 
************************************ 00:08:32.914 START TEST non_locking_app_on_locked_coremask 00:08:32.915 ************************************ 00:08:32.915 20:51:01 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:08:32.915 20:51:01 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=104599 00:08:32.915 20:51:01 -- event/cpu_locks.sh@81 -- # waitforlisten 104599 /var/tmp/spdk.sock 00:08:32.915 20:51:01 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:32.915 20:51:01 -- common/autotest_common.sh@819 -- # '[' -z 104599 ']' 00:08:32.915 20:51:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.915 20:51:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.915 20:51:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.915 20:51:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.915 20:51:01 -- common/autotest_common.sh@10 -- # set +x 00:08:33.173 [2024-06-09 20:51:01.113313] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:33.173 [2024-06-09 20:51:01.113536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104599 ] 00:08:33.173 [2024-06-09 20:51:01.281865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.431 [2024-06-09 20:51:01.533452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.431 [2024-06-09 20:51:01.533792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.804 20:51:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.804 20:51:02 -- common/autotest_common.sh@852 -- # return 0 00:08:34.804 20:51:02 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:34.804 20:51:02 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=104636 00:08:34.804 20:51:02 -- event/cpu_locks.sh@85 -- # waitforlisten 104636 /var/tmp/spdk2.sock 00:08:34.804 20:51:02 -- common/autotest_common.sh@819 -- # '[' -z 104636 ']' 00:08:34.804 20:51:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:34.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:34.804 20:51:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:34.804 20:51:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:34.804 20:51:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:34.804 20:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:34.804 [2024-06-09 20:51:02.856673] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:34.804 [2024-06-09 20:51:02.856928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104636 ] 00:08:35.062 [2024-06-09 20:51:03.013141] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
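
non_locking_app_on_locked_coremask runs two targets on the same -m 0x1 mask: the first takes the core-0 lock normally, while the second is given --disable-cpumask-locks plus its own RPC socket and must come up regardless. The spawn pattern, condensed from the trace (waitforlisten as sketched earlier):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 & pid1=$!          # first instance owns spdk_cpu_lock for core 0
    waitforlisten "$pid1" /var/tmp/spdk.sock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock   # succeeds: it never contends for the lock
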
00:08:35.062 [2024-06-09 20:51:03.013235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.321 [2024-06-09 20:51:03.470246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:35.321 [2024-06-09 20:51:03.470528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.223 20:51:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.223 20:51:05 -- common/autotest_common.sh@852 -- # return 0 00:08:37.223 20:51:05 -- event/cpu_locks.sh@87 -- # locks_exist 104599 00:08:37.223 20:51:05 -- event/cpu_locks.sh@22 -- # lslocks -p 104599 00:08:37.223 20:51:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:37.481 20:51:05 -- event/cpu_locks.sh@89 -- # killprocess 104599 00:08:37.481 20:51:05 -- common/autotest_common.sh@926 -- # '[' -z 104599 ']' 00:08:37.481 20:51:05 -- common/autotest_common.sh@930 -- # kill -0 104599 00:08:37.481 20:51:05 -- common/autotest_common.sh@931 -- # uname 00:08:37.481 20:51:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:37.481 20:51:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104599 00:08:37.740 20:51:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:37.740 killing process with pid 104599 00:08:37.740 20:51:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:37.740 20:51:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104599' 00:08:37.740 20:51:05 -- common/autotest_common.sh@945 -- # kill 104599 00:08:37.740 20:51:05 -- common/autotest_common.sh@950 -- # wait 104599 00:08:41.932 20:51:10 -- event/cpu_locks.sh@90 -- # killprocess 104636 00:08:41.932 20:51:10 -- common/autotest_common.sh@926 -- # '[' -z 104636 ']' 00:08:41.932 20:51:10 -- common/autotest_common.sh@930 -- # kill -0 104636 00:08:41.932 20:51:10 -- common/autotest_common.sh@931 -- # uname 00:08:41.932 20:51:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:41.932 20:51:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104636 00:08:41.932 20:51:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:41.932 killing process with pid 104636 00:08:41.932 20:51:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:41.932 20:51:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104636' 00:08:41.932 20:51:10 -- common/autotest_common.sh@945 -- # kill 104636 00:08:41.932 20:51:10 -- common/autotest_common.sh@950 -- # wait 104636 00:08:44.466 00:08:44.466 real 0m11.380s 00:08:44.466 user 0m12.171s 00:08:44.466 sys 0m1.348s 00:08:44.466 ************************************ 00:08:44.466 END TEST non_locking_app_on_locked_coremask 00:08:44.466 ************************************ 00:08:44.466 20:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.466 20:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:44.466 20:51:12 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:44.466 20:51:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.466 20:51:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.466 20:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:44.466 ************************************ 00:08:44.466 START TEST locking_app_on_unlocked_coremask 00:08:44.466 ************************************ 00:08:44.466 20:51:12 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:08:44.466 
20:51:12 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=104788 00:08:44.466 20:51:12 -- event/cpu_locks.sh@99 -- # waitforlisten 104788 /var/tmp/spdk.sock 00:08:44.466 20:51:12 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:44.466 20:51:12 -- common/autotest_common.sh@819 -- # '[' -z 104788 ']' 00:08:44.466 20:51:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.466 20:51:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.466 20:51:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.466 20:51:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.466 20:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:44.466 [2024-06-09 20:51:12.555355] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:44.466 [2024-06-09 20:51:12.555558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104788 ] 00:08:44.725 [2024-06-09 20:51:12.723973] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:44.725 [2024-06-09 20:51:12.724068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.984 [2024-06-09 20:51:12.917560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.984 [2024-06-09 20:51:12.917835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.358 20:51:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.358 20:51:14 -- common/autotest_common.sh@852 -- # return 0 00:08:46.358 20:51:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=104823 00:08:46.358 20:51:14 -- event/cpu_locks.sh@103 -- # waitforlisten 104823 /var/tmp/spdk2.sock 00:08:46.358 20:51:14 -- common/autotest_common.sh@819 -- # '[' -z 104823 ']' 00:08:46.358 20:51:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:46.358 20:51:14 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:46.358 20:51:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.358 20:51:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:46.358 20:51:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.358 20:51:14 -- common/autotest_common.sh@10 -- # set +x 00:08:46.358 [2024-06-09 20:51:14.324896] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
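
locking_app_on_unlocked_coremask inverts the setup: this target starts with --disable-cpumask-locks, so no core lock exists at launch, and the lock can then be taken and dropped at runtime through the framework RPCs that the default_locks_via_rpc trace above also exercises. A sketch of the toggle with plausible assertions around it (the two RPC method names and the positive locks_exist check appear verbatim in the log; the negative check while disabled is an assumption):

    rpc_cmd framework_enable_cpumask_locks    # acquire the spdk_cpu_lock file at runtime
    locks_exist "$spdk_tgt_pid"
    rpc_cmd framework_disable_cpumask_locks   # release it again
    ! locks_exist "$spdk_tgt_pid"             # assumption: no lock file while disabled
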
00:08:46.358 [2024-06-09 20:51:14.325105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104823 ] 00:08:46.358 [2024-06-09 20:51:14.485893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.925 [2024-06-09 20:51:14.849918] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.925 [2024-06-09 20:51:14.850143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.827 20:51:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.827 20:51:16 -- common/autotest_common.sh@852 -- # return 0 00:08:48.827 20:51:16 -- event/cpu_locks.sh@105 -- # locks_exist 104823 00:08:48.827 20:51:16 -- event/cpu_locks.sh@22 -- # lslocks -p 104823 00:08:48.827 20:51:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.085 20:51:17 -- event/cpu_locks.sh@107 -- # killprocess 104788 00:08:49.085 20:51:17 -- common/autotest_common.sh@926 -- # '[' -z 104788 ']' 00:08:49.085 20:51:17 -- common/autotest_common.sh@930 -- # kill -0 104788 00:08:49.085 20:51:17 -- common/autotest_common.sh@931 -- # uname 00:08:49.085 20:51:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:49.085 20:51:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104788 00:08:49.085 20:51:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:49.085 killing process with pid 104788 00:08:49.085 20:51:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:49.085 20:51:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104788' 00:08:49.085 20:51:17 -- common/autotest_common.sh@945 -- # kill 104788 00:08:49.085 20:51:17 -- common/autotest_common.sh@950 -- # wait 104788 00:08:53.271 20:51:21 -- event/cpu_locks.sh@108 -- # killprocess 104823 00:08:53.271 20:51:21 -- common/autotest_common.sh@926 -- # '[' -z 104823 ']' 00:08:53.271 20:51:21 -- common/autotest_common.sh@930 -- # kill -0 104823 00:08:53.271 20:51:21 -- common/autotest_common.sh@931 -- # uname 00:08:53.271 20:51:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:53.271 20:51:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104823 00:08:53.271 20:51:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:53.271 killing process with pid 104823 00:08:53.271 20:51:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:53.271 20:51:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104823' 00:08:53.271 20:51:21 -- common/autotest_common.sh@945 -- # kill 104823 00:08:53.271 20:51:21 -- common/autotest_common.sh@950 -- # wait 104823 00:08:55.168 00:08:55.168 real 0m10.389s 00:08:55.168 user 0m11.220s 00:08:55.168 sys 0m1.264s 00:08:55.168 20:51:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.168 20:51:22 -- common/autotest_common.sh@10 -- # set +x 00:08:55.168 ************************************ 00:08:55.168 END TEST locking_app_on_unlocked_coremask 00:08:55.168 ************************************ 00:08:55.168 20:51:22 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:55.168 20:51:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:55.168 20:51:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.168 20:51:22 -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.168 ************************************ 00:08:55.168 START TEST locking_app_on_locked_coremask 00:08:55.168 ************************************ 00:08:55.168 20:51:22 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:08:55.168 20:51:22 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=104970 00:08:55.168 20:51:22 -- event/cpu_locks.sh@116 -- # waitforlisten 104970 /var/tmp/spdk.sock 00:08:55.168 20:51:22 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.169 20:51:22 -- common/autotest_common.sh@819 -- # '[' -z 104970 ']' 00:08:55.169 20:51:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.169 20:51:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.169 20:51:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.169 20:51:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.169 20:51:22 -- common/autotest_common.sh@10 -- # set +x 00:08:55.169 [2024-06-09 20:51:22.995944] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:55.169 [2024-06-09 20:51:22.996158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104970 ] 00:08:55.169 [2024-06-09 20:51:23.158029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.169 [2024-06-09 20:51:23.319853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.169 [2024-06-09 20:51:23.320086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.543 20:51:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.543 20:51:24 -- common/autotest_common.sh@852 -- # return 0 00:08:56.543 20:51:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=104993 00:08:56.544 20:51:24 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:56.544 20:51:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 104993 /var/tmp/spdk2.sock 00:08:56.544 20:51:24 -- common/autotest_common.sh@640 -- # local es=0 00:08:56.544 20:51:24 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 104993 /var/tmp/spdk2.sock 00:08:56.544 20:51:24 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:56.544 20:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:56.544 20:51:24 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:56.544 20:51:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:56.544 20:51:24 -- common/autotest_common.sh@643 -- # waitforlisten 104993 /var/tmp/spdk2.sock 00:08:56.544 20:51:24 -- common/autotest_common.sh@819 -- # '[' -z 104993 ']' 00:08:56.544 20:51:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.544 20:51:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:56.544 20:51:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.544 20:51:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.544 20:51:24 -- common/autotest_common.sh@10 -- # set +x 00:08:56.544 [2024-06-09 20:51:24.619561] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:56.544 [2024-06-09 20:51:24.619779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104993 ] 00:08:56.802 [2024-06-09 20:51:24.790928] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 104970 has claimed it. 00:08:56.802 [2024-06-09 20:51:24.791060] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:57.368 ERROR: process (pid: 104993) is no longer running 00:08:57.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (104993) - No such process 00:08:57.368 20:51:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:57.368 20:51:25 -- common/autotest_common.sh@852 -- # return 1 00:08:57.368 20:51:25 -- common/autotest_common.sh@643 -- # es=1 00:08:57.369 20:51:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:57.369 20:51:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:57.369 20:51:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:57.369 20:51:25 -- event/cpu_locks.sh@122 -- # locks_exist 104970 00:08:57.369 20:51:25 -- event/cpu_locks.sh@22 -- # lslocks -p 104970 00:08:57.369 20:51:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:57.369 20:51:25 -- event/cpu_locks.sh@124 -- # killprocess 104970 00:08:57.369 20:51:25 -- common/autotest_common.sh@926 -- # '[' -z 104970 ']' 00:08:57.369 20:51:25 -- common/autotest_common.sh@930 -- # kill -0 104970 00:08:57.369 20:51:25 -- common/autotest_common.sh@931 -- # uname 00:08:57.369 20:51:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:57.369 20:51:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104970 00:08:57.627 20:51:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:57.627 killing process with pid 104970 00:08:57.627 20:51:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:57.627 20:51:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104970' 00:08:57.627 20:51:25 -- common/autotest_common.sh@945 -- # kill 104970 00:08:57.627 20:51:25 -- common/autotest_common.sh@950 -- # wait 104970 00:08:59.531 00:08:59.531 real 0m4.411s 00:08:59.531 user 0m4.746s 00:08:59.531 sys 0m0.754s 00:08:59.531 20:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.531 20:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 ************************************ 00:08:59.531 END TEST locking_app_on_locked_coremask 00:08:59.531 ************************************ 00:08:59.531 20:51:27 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:59.531 20:51:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:59.531 20:51:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:59.531 20:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 ************************************ 00:08:59.531 START TEST 
locking_overlapped_coremask 00:08:59.531 ************************************ 00:08:59.531 20:51:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:08:59.531 20:51:27 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=105062 00:08:59.531 20:51:27 -- event/cpu_locks.sh@133 -- # waitforlisten 105062 /var/tmp/spdk.sock 00:08:59.531 20:51:27 -- common/autotest_common.sh@819 -- # '[' -z 105062 ']' 00:08:59.531 20:51:27 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:59.531 20:51:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.531 20:51:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.531 20:51:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.531 20:51:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.531 20:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 [2024-06-09 20:51:27.453774] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:59.531 [2024-06-09 20:51:27.454507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105062 ] 00:08:59.531 [2024-06-09 20:51:27.630464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.790 [2024-06-09 20:51:27.801385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.790 [2024-06-09 20:51:27.801805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.790 [2024-06-09 20:51:27.802309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.790 [2024-06-09 20:51:27.802355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.216 20:51:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.216 20:51:29 -- common/autotest_common.sh@852 -- # return 0 00:09:01.216 20:51:29 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:01.216 20:51:29 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=105099 00:09:01.216 20:51:29 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 105099 /var/tmp/spdk2.sock 00:09:01.216 20:51:29 -- common/autotest_common.sh@640 -- # local es=0 00:09:01.216 20:51:29 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105099 /var/tmp/spdk2.sock 00:09:01.216 20:51:29 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:01.216 20:51:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:01.216 20:51:29 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:01.216 20:51:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:01.216 20:51:29 -- common/autotest_common.sh@643 -- # waitforlisten 105099 /var/tmp/spdk2.sock 00:09:01.216 20:51:29 -- common/autotest_common.sh@819 -- # '[' -z 105099 ']' 00:09:01.216 20:51:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:01.216 20:51:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:01.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:01.216 20:51:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:01.216 20:51:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:01.216 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:09:01.216 [2024-06-09 20:51:29.179663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:01.216 [2024-06-09 20:51:29.180265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105099 ] 00:09:01.216 [2024-06-09 20:51:29.370495] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105062 has claimed it. 00:09:01.216 [2024-06-09 20:51:29.370609] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:01.784 ERROR: process (pid: 105099) is no longer running 00:09:01.784 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105099) - No such process 00:09:01.784 20:51:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.784 20:51:29 -- common/autotest_common.sh@852 -- # return 1 00:09:01.784 20:51:29 -- common/autotest_common.sh@643 -- # es=1 00:09:01.784 20:51:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:01.784 20:51:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:01.784 20:51:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:01.784 20:51:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:01.784 20:51:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:01.784 20:51:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:01.784 20:51:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:01.784 20:51:29 -- event/cpu_locks.sh@141 -- # killprocess 105062 00:09:01.784 20:51:29 -- common/autotest_common.sh@926 -- # '[' -z 105062 ']' 00:09:01.784 20:51:29 -- common/autotest_common.sh@930 -- # kill -0 105062 00:09:01.784 20:51:29 -- common/autotest_common.sh@931 -- # uname 00:09:01.784 20:51:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:01.784 20:51:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105062 00:09:01.784 20:51:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.784 killing process with pid 105062 00:09:01.784 20:51:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.784 20:51:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105062' 00:09:01.784 20:51:29 -- common/autotest_common.sh@945 -- # kill 105062 00:09:01.784 20:51:29 -- common/autotest_common.sh@950 -- # wait 105062 00:09:03.686 00:09:03.686 real 0m4.407s 00:09:03.686 user 0m12.113s 00:09:03.686 sys 0m0.608s 00:09:03.686 ************************************ 00:09:03.686 END TEST locking_overlapped_coremask 00:09:03.686 ************************************ 00:09:03.686 20:51:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.686 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:09:03.686 20:51:31 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:03.686 20:51:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.686 20:51:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.686 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:09:03.686 ************************************ 00:09:03.686 START TEST locking_overlapped_coremask_via_rpc 00:09:03.686 ************************************ 00:09:03.686 20:51:31 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:03.686 20:51:31 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=105156 00:09:03.686 20:51:31 -- event/cpu_locks.sh@149 -- # waitforlisten 105156 /var/tmp/spdk.sock 00:09:03.686 20:51:31 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:03.686 20:51:31 -- common/autotest_common.sh@819 -- # '[' -z 105156 ']' 00:09:03.686 20:51:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.686 20:51:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:03.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.686 20:51:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.686 20:51:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:03.686 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:09:03.945 [2024-06-09 20:51:31.925895] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:03.945 [2024-06-09 20:51:31.926185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105156 ] 00:09:03.945 [2024-06-09 20:51:32.111499] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:03.945 [2024-06-09 20:51:32.111565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.204 [2024-06-09 20:51:32.298692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.204 [2024-06-09 20:51:32.299086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.204 [2024-06-09 20:51:32.299189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.204 [2024-06-09 20:51:32.299183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.581 20:51:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:05.581 20:51:33 -- common/autotest_common.sh@852 -- # return 0 00:09:05.581 20:51:33 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=105195 00:09:05.581 20:51:33 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:05.581 20:51:33 -- event/cpu_locks.sh@153 -- # waitforlisten 105195 /var/tmp/spdk2.sock 00:09:05.581 20:51:33 -- common/autotest_common.sh@819 -- # '[' -z 105195 ']' 00:09:05.581 20:51:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.581 20:51:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:05.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:05.581 20:51:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:05.581 20:51:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:05.581 20:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:05.581 [2024-06-09 20:51:33.659657] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:05.581 [2024-06-09 20:51:33.659878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105195 ] 00:09:05.840 [2024-06-09 20:51:33.842782] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:05.840 [2024-06-09 20:51:33.842857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.099 [2024-06-09 20:51:34.204055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.099 [2024-06-09 20:51:34.204813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.100 [2024-06-09 20:51:34.217827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.100 [2024-06-09 20:51:34.217829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:08.001 20:51:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.001 20:51:36 -- common/autotest_common.sh@852 -- # return 0 00:09:08.001 20:51:36 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:08.001 20:51:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:08.001 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:08.001 20:51:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:08.001 20:51:36 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.001 20:51:36 -- common/autotest_common.sh@640 -- # local es=0 00:09:08.001 20:51:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.001 20:51:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:08.001 20:51:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:08.001 20:51:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:08.001 20:51:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:08.001 20:51:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:08.001 20:51:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:08.001 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:08.001 [2024-06-09 20:51:36.057769] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105156 has claimed it. 
00:09:08.001 request: 00:09:08.001 { 00:09:08.001 "method": "framework_enable_cpumask_locks", 00:09:08.001 "req_id": 1 00:09:08.001 } 00:09:08.001 Got JSON-RPC error response 00:09:08.001 response: 00:09:08.001 { 00:09:08.001 "code": -32603, 00:09:08.001 "message": "Failed to claim CPU core: 2" 00:09:08.001 } 00:09:08.001 20:51:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:08.001 20:51:36 -- common/autotest_common.sh@643 -- # es=1 00:09:08.001 20:51:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:08.001 20:51:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:08.001 20:51:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:08.001 20:51:36 -- event/cpu_locks.sh@158 -- # waitforlisten 105156 /var/tmp/spdk.sock 00:09:08.001 20:51:36 -- common/autotest_common.sh@819 -- # '[' -z 105156 ']' 00:09:08.001 20:51:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.001 20:51:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.001 20:51:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.001 20:51:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.001 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:08.261 20:51:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.261 20:51:36 -- common/autotest_common.sh@852 -- # return 0 00:09:08.261 20:51:36 -- event/cpu_locks.sh@159 -- # waitforlisten 105195 /var/tmp/spdk2.sock 00:09:08.261 20:51:36 -- common/autotest_common.sh@819 -- # '[' -z 105195 ']' 00:09:08.261 20:51:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.261 20:51:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.261 20:51:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:08.261 20:51:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.261 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:08.520 20:51:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.520 20:51:36 -- common/autotest_common.sh@852 -- # return 0 00:09:08.520 20:51:36 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:08.520 20:51:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:08.520 20:51:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:08.520 20:51:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:08.520 00:09:08.520 real 0m4.621s 00:09:08.520 user 0m1.842s 00:09:08.520 sys 0m0.227s 00:09:08.520 20:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.520 ************************************ 00:09:08.520 END TEST locking_overlapped_coremask_via_rpc 00:09:08.520 ************************************ 00:09:08.520 20:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:08.520 20:51:36 -- event/cpu_locks.sh@174 -- # cleanup 00:09:08.520 20:51:36 -- event/cpu_locks.sh@15 -- # [[ -z 105156 ]] 00:09:08.520 20:51:36 -- event/cpu_locks.sh@15 -- # killprocess 105156 00:09:08.520 20:51:36 -- common/autotest_common.sh@926 -- # '[' -z 105156 ']' 00:09:08.520 20:51:36 -- common/autotest_common.sh@930 -- # kill -0 105156 00:09:08.520 20:51:36 -- common/autotest_common.sh@931 -- # uname 00:09:08.520 20:51:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:08.520 20:51:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105156 00:09:08.520 20:51:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:08.520 killing process with pid 105156 00:09:08.520 20:51:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:08.520 20:51:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105156' 00:09:08.520 20:51:36 -- common/autotest_common.sh@945 -- # kill 105156 00:09:08.520 20:51:36 -- common/autotest_common.sh@950 -- # wait 105156 00:09:10.426 20:51:38 -- event/cpu_locks.sh@16 -- # [[ -z 105195 ]] 00:09:10.426 20:51:38 -- event/cpu_locks.sh@16 -- # killprocess 105195 00:09:10.426 20:51:38 -- common/autotest_common.sh@926 -- # '[' -z 105195 ']' 00:09:10.426 20:51:38 -- common/autotest_common.sh@930 -- # kill -0 105195 00:09:10.426 20:51:38 -- common/autotest_common.sh@931 -- # uname 00:09:10.426 20:51:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:10.426 20:51:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105195 00:09:10.685 20:51:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:10.686 killing process with pid 105195 00:09:10.686 20:51:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:10.686 20:51:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105195' 00:09:10.686 20:51:38 -- common/autotest_common.sh@945 -- # kill 105195 00:09:10.686 20:51:38 -- common/autotest_common.sh@950 -- # wait 105195 00:09:12.591 20:51:40 -- event/cpu_locks.sh@18 -- # rm -f 00:09:12.591 20:51:40 -- event/cpu_locks.sh@1 -- # cleanup 00:09:12.591 20:51:40 -- event/cpu_locks.sh@15 -- # [[ -z 105156 ]] 00:09:12.591 20:51:40 -- event/cpu_locks.sh@15 -- # killprocess 105156 00:09:12.591 
20:51:40 -- common/autotest_common.sh@926 -- # '[' -z 105156 ']' 00:09:12.591 20:51:40 -- common/autotest_common.sh@930 -- # kill -0 105156 00:09:12.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (105156) - No such process 00:09:12.591 Process with pid 105156 is not found 00:09:12.591 20:51:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 105156 is not found' 00:09:12.591 20:51:40 -- event/cpu_locks.sh@16 -- # [[ -z 105195 ]] 00:09:12.591 20:51:40 -- event/cpu_locks.sh@16 -- # killprocess 105195 00:09:12.591 20:51:40 -- common/autotest_common.sh@926 -- # '[' -z 105195 ']' 00:09:12.591 Process with pid 105195 is not found 00:09:12.591 20:51:40 -- common/autotest_common.sh@930 -- # kill -0 105195 00:09:12.591 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (105195) - No such process 00:09:12.591 20:51:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 105195 is not found' 00:09:12.591 20:51:40 -- event/cpu_locks.sh@18 -- # rm -f 00:09:12.591 00:09:12.591 real 0m48.103s 00:09:12.591 user 1m23.121s 00:09:12.591 sys 0m6.544s 00:09:12.591 20:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.591 ************************************ 00:09:12.591 END TEST cpu_locks 00:09:12.591 ************************************ 00:09:12.591 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:09:12.591 00:09:12.591 real 1m19.596s 00:09:12.591 user 2m24.113s 00:09:12.591 sys 0m10.339s 00:09:12.591 20:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.591 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:09:12.591 ************************************ 00:09:12.591 END TEST event 00:09:12.591 ************************************ 00:09:12.591 20:51:40 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:12.591 20:51:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:12.591 20:51:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.592 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:09:12.592 ************************************ 00:09:12.592 START TEST thread 00:09:12.592 ************************************ 00:09:12.592 20:51:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:12.592 * Looking for test storage... 00:09:12.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:12.592 20:51:40 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:12.592 20:51:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:12.592 20:51:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:12.592 20:51:40 -- common/autotest_common.sh@10 -- # set +x 00:09:12.592 ************************************ 00:09:12.592 START TEST thread_poller_perf 00:09:12.592 ************************************ 00:09:12.592 20:51:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:12.592 [2024-06-09 20:51:40.716159] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:12.592 [2024-06-09 20:51:40.716393] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105379 ] 00:09:12.850 [2024-06-09 20:51:40.885768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.109 [2024-06-09 20:51:41.101207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.109 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:14.486 ====================================== 00:09:14.486 busy:2208506094 (cyc) 00:09:14.486 total_run_count: 362000 00:09:14.486 tsc_hz: 2200000000 (cyc) 00:09:14.486 ====================================== 00:09:14.486 poller_cost: 6100 (cyc), 2772 (nsec) 00:09:14.486 00:09:14.486 real 0m1.785s 00:09:14.486 user 0m1.552s 00:09:14.486 sys 0m0.133s 00:09:14.486 20:51:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.486 ************************************ 00:09:14.486 END TEST thread_poller_perf 00:09:14.486 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:14.486 ************************************ 00:09:14.486 20:51:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:14.486 20:51:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:14.486 20:51:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.486 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:14.486 ************************************ 00:09:14.486 START TEST thread_poller_perf 00:09:14.486 ************************************ 00:09:14.486 20:51:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:14.486 [2024-06-09 20:51:42.548595] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:14.486 [2024-06-09 20:51:42.548813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105431 ] 00:09:14.745 [2024-06-09 20:51:42.716552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.745 [2024-06-09 20:51:42.901298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.745 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:16.120 ====================================== 00:09:16.120 busy:2205180352 (cyc) 00:09:16.120 total_run_count: 4418000 00:09:16.120 tsc_hz: 2200000000 (cyc) 00:09:16.120 ====================================== 00:09:16.120 poller_cost: 499 (cyc), 226 (nsec) 00:09:16.120 00:09:16.120 real 0m1.755s 00:09:16.120 user 0m1.515s 00:09:16.120 sys 0m0.141s 00:09:16.120 20:51:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.120 ************************************ 00:09:16.120 END TEST thread_poller_perf 00:09:16.120 ************************************ 00:09:16.120 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:09:16.379 20:51:44 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:16.379 20:51:44 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:16.379 20:51:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:16.379 20:51:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:16.379 20:51:44 -- common/autotest_common.sh@10 -- # set +x 00:09:16.379 ************************************ 00:09:16.379 START TEST thread_spdk_lock 00:09:16.379 ************************************ 00:09:16.379 20:51:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:16.379 [2024-06-09 20:51:44.357909] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:16.379 [2024-06-09 20:51:44.358695] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105479 ] 00:09:16.379 [2024-06-09 20:51:44.535673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.636 [2024-06-09 20:51:44.698992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.636 [2024-06-09 20:51:44.698996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.203 [2024-06-09 20:51:45.215414] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:17.203 [2024-06-09 20:51:45.215526] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:17.203 [2024-06-09 20:51:45.215568] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5639b20f0a00 00:09:17.203 [2024-06-09 20:51:45.222701] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:17.203 [2024-06-09 20:51:45.222822] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:17.203 [2024-06-09 20:51:45.222880] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:17.463 Starting test contend 00:09:17.463 Worker Delay Wait us Hold us Total us 00:09:17.463 0 3 138050 191157 329207 00:09:17.463 1 5 58909 294847 353757 00:09:17.463 PASS test contend 00:09:17.463 Starting test hold_by_poller 
00:09:17.463 PASS test hold_by_poller 00:09:17.463 Starting test hold_by_message 00:09:17.463 PASS test hold_by_message 00:09:17.463 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:17.463 100014 assertions passed 00:09:17.463 0 assertions failed 00:09:17.463 00:09:17.463 real 0m1.235s 00:09:17.463 user 0m1.530s 00:09:17.463 sys 0m0.128s 00:09:17.463 20:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.463 ************************************ 00:09:17.463 END TEST thread_spdk_lock 00:09:17.463 ************************************ 00:09:17.463 20:51:45 -- common/autotest_common.sh@10 -- # set +x 00:09:17.463 00:09:17.463 real 0m4.992s 00:09:17.463 user 0m4.713s 00:09:17.463 sys 0m0.495s 00:09:17.463 20:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.463 20:51:45 -- common/autotest_common.sh@10 -- # set +x 00:09:17.463 ************************************ 00:09:17.463 END TEST thread 00:09:17.463 ************************************ 00:09:17.463 20:51:45 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:17.463 20:51:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.463 20:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.463 20:51:45 -- common/autotest_common.sh@10 -- # set +x 00:09:17.463 ************************************ 00:09:17.463 START TEST accel 00:09:17.463 ************************************ 00:09:17.463 20:51:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:17.722 * Looking for test storage... 00:09:17.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:17.722 20:51:45 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:17.722 20:51:45 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:17.722 20:51:45 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:17.722 20:51:45 -- accel/accel.sh@59 -- # spdk_tgt_pid=105557 00:09:17.722 20:51:45 -- accel/accel.sh@60 -- # waitforlisten 105557 00:09:17.722 20:51:45 -- common/autotest_common.sh@819 -- # '[' -z 105557 ']' 00:09:17.722 20:51:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.722 20:51:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:17.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.722 20:51:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.722 20:51:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:17.722 20:51:45 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:17.722 20:51:45 -- common/autotest_common.sh@10 -- # set +x 00:09:17.722 20:51:45 -- accel/accel.sh@58 -- # build_accel_config 00:09:17.722 20:51:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:17.722 20:51:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:17.722 20:51:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:17.722 20:51:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:17.722 20:51:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:17.722 20:51:45 -- accel/accel.sh@41 -- # local IFS=, 00:09:17.722 20:51:45 -- accel/accel.sh@42 -- # jq -r . 00:09:17.722 [2024-06-09 20:51:45.774483] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:17.722 [2024-06-09 20:51:45.774669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105557 ] 00:09:17.981 [2024-06-09 20:51:45.926308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.981 [2024-06-09 20:51:46.091667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:17.981 [2024-06-09 20:51:46.091932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.358 20:51:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:19.358 20:51:47 -- common/autotest_common.sh@852 -- # return 0 00:09:19.358 20:51:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:19.358 20:51:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:19.358 20:51:47 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:19.358 20:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.358 20:51:47 -- common/autotest_common.sh@10 -- # set +x 00:09:19.358 20:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # IFS== 00:09:19.358 20:51:47 -- accel/accel.sh@64 -- # read -r opc module 00:09:19.358 20:51:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:19.358 20:51:47 -- accel/accel.sh@67 -- # killprocess 105557 00:09:19.358 20:51:47 -- common/autotest_common.sh@926 -- # '[' -z 105557 ']' 00:09:19.358 20:51:47 -- common/autotest_common.sh@930 -- # kill -0 105557 00:09:19.358 20:51:47 -- common/autotest_common.sh@931 -- # uname 00:09:19.358 20:51:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:19.358 20:51:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105557 00:09:19.358 20:51:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:19.358 killing process with pid 105557 00:09:19.358 20:51:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:19.358 20:51:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105557' 00:09:19.358 20:51:47 -- common/autotest_common.sh@945 -- # kill 105557 00:09:19.358 20:51:47 -- common/autotest_common.sh@950 -- # wait 105557 00:09:21.261 20:51:49 -- accel/accel.sh@68 -- # trap - ERR 00:09:21.261 20:51:49 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:21.261 20:51:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:21.261 20:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.261 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 20:51:49 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:21.261 20:51:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:21.261 20:51:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.261 20:51:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.261 20:51:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.261 20:51:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.261 20:51:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.261 20:51:49 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:09:21.261 20:51:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.261 20:51:49 -- accel/accel.sh@42 -- # jq -r . 00:09:21.261 20:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.261 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 20:51:49 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:21.261 20:51:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:21.261 20:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.261 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 ************************************ 00:09:21.261 START TEST accel_missing_filename 00:09:21.261 ************************************ 00:09:21.262 20:51:49 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:21.262 20:51:49 -- common/autotest_common.sh@640 -- # local es=0 00:09:21.262 20:51:49 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:21.262 20:51:49 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:21.262 20:51:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.262 20:51:49 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:21.262 20:51:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.262 20:51:49 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:21.262 20:51:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:21.262 20:51:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.262 20:51:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.262 20:51:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.262 20:51:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.262 20:51:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.262 20:51:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.262 20:51:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.262 20:51:49 -- accel/accel.sh@42 -- # jq -r . 00:09:21.520 [2024-06-09 20:51:49.473954] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:21.520 [2024-06-09 20:51:49.474203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105656 ] 00:09:21.520 [2024-06-09 20:51:49.644261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.778 [2024-06-09 20:51:49.828749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.037 [2024-06-09 20:51:50.022043] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.296 [2024-06-09 20:51:50.461327] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:22.863 A filename is required. 
00:09:22.863 20:51:50 -- common/autotest_common.sh@643 -- # es=234 00:09:22.863 20:51:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:22.863 20:51:50 -- common/autotest_common.sh@652 -- # es=106 00:09:22.863 20:51:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:22.863 20:51:50 -- common/autotest_common.sh@660 -- # es=1 00:09:22.863 20:51:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:22.863 00:09:22.863 real 0m1.390s 00:09:22.863 user 0m1.122s 00:09:22.863 sys 0m0.220s 00:09:22.863 20:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.863 ************************************ 00:09:22.863 END TEST accel_missing_filename 00:09:22.863 ************************************ 00:09:22.863 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:09:22.863 20:51:50 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:22.863 20:51:50 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:22.863 20:51:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.863 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:09:22.863 ************************************ 00:09:22.863 START TEST accel_compress_verify 00:09:22.863 ************************************ 00:09:22.863 20:51:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:22.863 20:51:50 -- common/autotest_common.sh@640 -- # local es=0 00:09:22.863 20:51:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:22.863 20:51:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:22.863 20:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:22.863 20:51:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:22.863 20:51:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:22.863 20:51:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:22.863 20:51:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:22.863 20:51:50 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.863 20:51:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:22.863 20:51:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.863 20:51:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.863 20:51:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:22.863 20:51:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:22.863 20:51:50 -- accel/accel.sh@41 -- # local IFS=, 00:09:22.863 20:51:50 -- accel/accel.sh@42 -- # jq -r . 00:09:22.863 [2024-06-09 20:51:50.917277] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:22.863 [2024-06-09 20:51:50.918056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105694 ] 00:09:23.122 [2024-06-09 20:51:51.083844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.122 [2024-06-09 20:51:51.276629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.381 [2024-06-09 20:51:51.464565] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.947 [2024-06-09 20:51:51.911480] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:24.204 00:09:24.204 Compression does not support the verify option, aborting. 00:09:24.204 20:51:52 -- common/autotest_common.sh@643 -- # es=161 00:09:24.204 20:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:24.204 20:51:52 -- common/autotest_common.sh@652 -- # es=33 00:09:24.204 20:51:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:24.204 20:51:52 -- common/autotest_common.sh@660 -- # es=1 00:09:24.204 20:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:24.204 00:09:24.204 real 0m1.383s 00:09:24.204 user 0m1.133s 00:09:24.204 sys 0m0.201s 00:09:24.204 20:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.204 ************************************ 00:09:24.204 END TEST accel_compress_verify 00:09:24.204 ************************************ 00:09:24.204 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.204 20:51:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:24.204 20:51:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:24.204 20:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.205 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.205 ************************************ 00:09:24.205 START TEST accel_wrong_workload 00:09:24.205 ************************************ 00:09:24.205 20:51:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:24.205 20:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:09:24.205 20:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:24.205 20:51:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:24.205 20:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.205 20:51:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:24.205 20:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.205 20:51:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:24.205 20:51:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:24.205 20:51:52 -- accel/accel.sh@12 -- # build_accel_config 00:09:24.205 20:51:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.205 20:51:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.205 20:51:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.205 20:51:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.205 20:51:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.205 20:51:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.205 20:51:52 -- accel/accel.sh@42 -- # jq -r . 
00:09:24.205 Unsupported workload type: foobar 00:09:24.205 [2024-06-09 20:51:52.355350] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:24.205 accel_perf options: 00:09:24.205 [-h help message] 00:09:24.205 [-q queue depth per core] 00:09:24.205 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:24.205 [-T number of threads per core 00:09:24.205 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:24.205 [-t time in seconds] 00:09:24.205 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:24.205 [ dif_verify, , dif_generate, dif_generate_copy 00:09:24.205 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:24.205 [-l for compress/decompress workloads, name of uncompressed input file 00:09:24.205 [-S for crc32c workload, use this seed value (default 0) 00:09:24.205 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:24.205 [-f for fill workload, use this BYTE value (default 255) 00:09:24.205 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:24.205 [-y verify result if this switch is on] 00:09:24.205 [-a tasks to allocate per core (default: same value as -q)] 00:09:24.205 Can be used to spread operations across a wider range of memory. 00:09:24.482 20:51:52 -- common/autotest_common.sh@643 -- # es=1 00:09:24.482 20:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:24.482 20:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:24.482 20:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:24.482 00:09:24.482 real 0m0.074s 00:09:24.482 user 0m0.080s 00:09:24.482 sys 0m0.049s 00:09:24.482 20:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.482 ************************************ 00:09:24.482 END TEST accel_wrong_workload 00:09:24.482 ************************************ 00:09:24.482 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.482 20:51:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:24.482 20:51:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:24.482 20:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.482 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.482 ************************************ 00:09:24.482 START TEST accel_negative_buffers 00:09:24.482 ************************************ 00:09:24.482 20:51:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:24.482 20:51:52 -- common/autotest_common.sh@640 -- # local es=0 00:09:24.482 20:51:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:24.482 20:51:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:24.482 20:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.482 20:51:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:24.482 20:51:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.482 20:51:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:09:24.482 20:51:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:24.482 20:51:52 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:24.482 20:51:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.482 20:51:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.482 20:51:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.482 20:51:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.482 20:51:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.482 20:51:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.482 20:51:52 -- accel/accel.sh@42 -- # jq -r . 00:09:24.482 -x option must be non-negative. 00:09:24.482 [2024-06-09 20:51:52.480571] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:24.482 accel_perf options: 00:09:24.482 [-h help message] 00:09:24.482 [-q queue depth per core] 00:09:24.482 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:24.482 [-T number of threads per core 00:09:24.482 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:24.482 [-t time in seconds] 00:09:24.482 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:24.482 [ dif_verify, , dif_generate, dif_generate_copy 00:09:24.482 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:24.482 [-l for compress/decompress workloads, name of uncompressed input file 00:09:24.482 [-S for crc32c workload, use this seed value (default 0) 00:09:24.483 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:24.483 [-f for fill workload, use this BYTE value (default 255) 00:09:24.483 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:24.483 [-y verify result if this switch is on] 00:09:24.483 [-a tasks to allocate per core (default: same value as -q)] 00:09:24.483 Can be used to spread operations across a wider range of memory. 
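For contrast with the two failing invocations above, valid usage inferred from the printed help (illustrative, not taken from the log itself) would be: xor requires at least two source buffers (-x minimum 2), and compress must run without -y, since verification is unsupported for that workload.

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib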
00:09:24.483 20:51:52 -- common/autotest_common.sh@643 -- # es=1 00:09:24.483 20:51:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:24.483 20:51:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:24.483 20:51:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:24.483 00:09:24.483 real 0m0.066s 00:09:24.483 user 0m0.084s 00:09:24.483 sys 0m0.034s 00:09:24.483 20:51:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.483 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.483 ************************************ 00:09:24.483 END TEST accel_negative_buffers 00:09:24.483 ************************************ 00:09:24.483 20:51:52 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:24.483 20:51:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:24.483 20:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.483 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:24.483 ************************************ 00:09:24.483 START TEST accel_crc32c 00:09:24.483 ************************************ 00:09:24.483 20:51:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:24.483 20:51:52 -- accel/accel.sh@16 -- # local accel_opc 00:09:24.483 20:51:52 -- accel/accel.sh@17 -- # local accel_module 00:09:24.483 20:51:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:24.483 20:51:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:24.483 20:51:52 -- accel/accel.sh@12 -- # build_accel_config 00:09:24.483 20:51:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:24.483 20:51:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:24.483 20:51:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:24.483 20:51:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:24.483 20:51:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:24.483 20:51:52 -- accel/accel.sh@41 -- # local IFS=, 00:09:24.483 20:51:52 -- accel/accel.sh@42 -- # jq -r . 00:09:24.483 [2024-06-09 20:51:52.597384] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:24.483 [2024-06-09 20:51:52.597651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105783 ] 00:09:24.741 [2024-06-09 20:51:52.778874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.000 [2024-06-09 20:51:53.022541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.903 20:51:54 -- accel/accel.sh@18 -- # out=' 00:09:26.903 SPDK Configuration: 00:09:26.903 Core mask: 0x1 00:09:26.903 00:09:26.903 Accel Perf Configuration: 00:09:26.903 Workload Type: crc32c 00:09:26.903 CRC-32C seed: 32 00:09:26.903 Transfer size: 4096 bytes 00:09:26.903 Vector count 1 00:09:26.903 Module: software 00:09:26.903 Queue depth: 32 00:09:26.903 Allocate depth: 32 00:09:26.903 # threads/core: 1 00:09:26.903 Run time: 1 seconds 00:09:26.903 Verify: Yes 00:09:26.903 00:09:26.903 Running for 1 seconds... 
00:09:26.903 00:09:26.903 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:26.903 ------------------------------------------------------------------------------------ 00:09:26.903 0,0 496064/s 1937 MiB/s 0 0 00:09:26.903 ==================================================================================== 00:09:26.903 Total 496064/s 1937 MiB/s 0 0' 00:09:26.903 20:51:54 -- accel/accel.sh@20 -- # IFS=: 00:09:26.903 20:51:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:26.903 20:51:54 -- accel/accel.sh@20 -- # read -r var val 00:09:26.903 20:51:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:26.903 20:51:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:26.903 20:51:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:26.903 20:51:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.903 20:51:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.903 20:51:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:26.903 20:51:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:26.903 20:51:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:26.903 20:51:54 -- accel/accel.sh@42 -- # jq -r . 00:09:26.903 [2024-06-09 20:51:55.010447] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:26.903 [2024-06-09 20:51:55.011363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105825 ] 00:09:27.162 [2024-06-09 20:51:55.160558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.421 [2024-06-09 20:51:55.370086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=0x1 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=crc32c 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=32 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=software 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@23 -- # accel_module=software 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=32 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=32 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=1 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val=Yes 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:27.421 20:51:55 -- accel/accel.sh@21 -- # val= 00:09:27.421 20:51:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # IFS=: 00:09:27.421 20:51:55 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 
-- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@21 -- # val= 00:09:29.325 20:51:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # IFS=: 00:09:29.325 20:51:57 -- accel/accel.sh@20 -- # read -r var val 00:09:29.325 20:51:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:29.325 20:51:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:29.325 20:51:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:29.325 00:09:29.325 real 0m4.797s 00:09:29.325 user 0m4.219s 00:09:29.325 sys 0m0.418s 00:09:29.325 20:51:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.325 20:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:29.325 ************************************ 00:09:29.325 END TEST accel_crc32c 00:09:29.325 ************************************ 00:09:29.325 20:51:57 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:29.325 20:51:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:29.325 20:51:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:29.325 20:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:29.325 ************************************ 00:09:29.325 START TEST accel_crc32c_C2 00:09:29.325 ************************************ 00:09:29.325 20:51:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:29.325 20:51:57 -- accel/accel.sh@16 -- # local accel_opc 00:09:29.325 20:51:57 -- accel/accel.sh@17 -- # local accel_module 00:09:29.325 20:51:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:29.325 20:51:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:29.325 20:51:57 -- accel/accel.sh@12 -- # build_accel_config 00:09:29.325 20:51:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:29.325 20:51:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.325 20:51:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.325 20:51:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:29.325 20:51:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:29.325 20:51:57 -- accel/accel.sh@41 -- # local IFS=, 00:09:29.325 20:51:57 -- accel/accel.sh@42 -- # jq -r . 00:09:29.325 [2024-06-09 20:51:57.438312] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:29.325 [2024-06-09 20:51:57.438967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105877 ] 00:09:29.583 [2024-06-09 20:51:57.593626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.842 [2024-06-09 20:51:57.772629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.745 20:51:59 -- accel/accel.sh@18 -- # out=' 00:09:31.745 SPDK Configuration: 00:09:31.745 Core mask: 0x1 00:09:31.745 00:09:31.745 Accel Perf Configuration: 00:09:31.745 Workload Type: crc32c 00:09:31.745 CRC-32C seed: 0 00:09:31.745 Transfer size: 4096 bytes 00:09:31.745 Vector count 2 00:09:31.745 Module: software 00:09:31.745 Queue depth: 32 00:09:31.745 Allocate depth: 32 00:09:31.745 # threads/core: 1 00:09:31.745 Run time: 1 seconds 00:09:31.745 Verify: Yes 00:09:31.745 00:09:31.745 Running for 1 seconds... 
00:09:31.745 00:09:31.745 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:31.746 ------------------------------------------------------------------------------------ 00:09:31.746 0,0 369408/s 2886 MiB/s 0 0 00:09:31.746 ==================================================================================== 00:09:31.746 Total 369408/s 1443 MiB/s 0 0' 00:09:31.746 20:51:59 -- accel/accel.sh@20 -- # IFS=: 00:09:31.746 20:51:59 -- accel/accel.sh@20 -- # read -r var val 00:09:31.746 20:51:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:31.746 20:51:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:31.746 20:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:09:31.746 20:51:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:31.746 20:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.746 20:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.746 20:51:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:31.746 20:51:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:31.746 20:51:59 -- accel/accel.sh@41 -- # local IFS=, 00:09:31.746 20:51:59 -- accel/accel.sh@42 -- # jq -r . 00:09:31.746 [2024-06-09 20:51:59.780792] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:31.746 [2024-06-09 20:51:59.781709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105912 ] 00:09:32.005 [2024-06-09 20:51:59.938529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.005 [2024-06-09 20:52:00.132025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=0x1 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=crc32c 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=0 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=software 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@23 -- # accel_module=software 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=32 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=32 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=1 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val=Yes 00:09:32.302 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.302 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.302 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.303 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.303 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.303 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:32.303 20:52:00 -- accel/accel.sh@21 -- # val= 00:09:32.303 20:52:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.303 20:52:00 -- accel/accel.sh@20 -- # IFS=: 00:09:32.303 20:52:00 -- accel/accel.sh@20 -- # read -r var val 00:09:34.218 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.218 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # read -r var val 00:09:34.218 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.218 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # read -r var val 00:09:34.218 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.218 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # read -r var val 00:09:34.218 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.218 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # read -r var val 00:09:34.218 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.218 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.218 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.219 20:52:02 -- 
accel/accel.sh@20 -- # read -r var val 00:09:34.219 20:52:02 -- accel/accel.sh@21 -- # val= 00:09:34.219 20:52:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:34.219 20:52:02 -- accel/accel.sh@20 -- # IFS=: 00:09:34.219 20:52:02 -- accel/accel.sh@20 -- # read -r var val 00:09:34.219 20:52:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:34.219 20:52:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:34.219 20:52:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:34.219 00:09:34.219 real 0m4.745s 00:09:34.219 user 0m4.209s 00:09:34.219 sys 0m0.377s 00:09:34.219 20:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.219 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:34.219 ************************************ 00:09:34.219 END TEST accel_crc32c_C2 00:09:34.219 ************************************ 00:09:34.219 20:52:02 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:34.219 20:52:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:34.219 20:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.219 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:09:34.219 ************************************ 00:09:34.219 START TEST accel_copy 00:09:34.219 ************************************ 00:09:34.219 20:52:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:09:34.219 20:52:02 -- accel/accel.sh@16 -- # local accel_opc 00:09:34.219 20:52:02 -- accel/accel.sh@17 -- # local accel_module 00:09:34.219 20:52:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:34.219 20:52:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:34.219 20:52:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:34.219 20:52:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:34.219 20:52:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.219 20:52:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.219 20:52:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:34.219 20:52:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:34.219 20:52:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:34.219 20:52:02 -- accel/accel.sh@42 -- # jq -r . 00:09:34.219 [2024-06-09 20:52:02.238890] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:34.219 [2024-06-09 20:52:02.239086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105957 ] 00:09:34.477 [2024-06-09 20:52:02.396918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.477 [2024-06-09 20:52:02.599006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.010 20:52:04 -- accel/accel.sh@18 -- # out=' 00:09:37.010 SPDK Configuration: 00:09:37.010 Core mask: 0x1 00:09:37.010 00:09:37.010 Accel Perf Configuration: 00:09:37.010 Workload Type: copy 00:09:37.010 Transfer size: 4096 bytes 00:09:37.010 Vector count 1 00:09:37.010 Module: software 00:09:37.010 Queue depth: 32 00:09:37.010 Allocate depth: 32 00:09:37.010 # threads/core: 1 00:09:37.010 Run time: 1 seconds 00:09:37.010 Verify: Yes 00:09:37.010 00:09:37.010 Running for 1 seconds... 
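Every accel_perf invocation in these traces passes -c /dev/fd/62: build_accel_config assembles a JSON accel config, pipes it through jq -r ., and bash process substitution hands the result to the binary as an anonymous file descriptor. A rough sketch of that plumbing — the JSON shape here is assumed for illustration, not lifted from accel.sh:

build_accel_config() {
    local IFS=,              # joins the per-module fragments, as in the trace above
    local accel_json_cfg=()  # stays empty in this run, so no modules are configured
    echo '{"subsystems": [{"subsystem": "accel", "config": ['"${accel_json_cfg[*]}"']}]}' | jq -r .
}
# the expanded <(...) is why the binary sees -c /dev/fd/62:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w copy -y

An empty config list is consistent with every [[ 0 -gt 0 ]] test in the build_accel_config traces evaluating false, as none of the optional accel modules are enabled for this run.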
00:09:37.010 00:09:37.010 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:37.010 ------------------------------------------------------------------------------------ 00:09:37.010 0,0 281600/s 1100 MiB/s 0 0 00:09:37.010 ==================================================================================== 00:09:37.010 Total 281600/s 1100 MiB/s 0 0' 00:09:37.010 20:52:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:37.010 20:52:04 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:04 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:37.010 20:52:04 -- accel/accel.sh@12 -- # build_accel_config 00:09:37.010 20:52:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.010 20:52:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.010 20:52:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.010 20:52:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.010 20:52:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.010 20:52:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.010 20:52:04 -- accel/accel.sh@42 -- # jq -r . 00:09:37.010 [2024-06-09 20:52:04.626827] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:37.010 [2024-06-09 20:52:04.627009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106007 ] 00:09:37.010 [2024-06-09 20:52:04.781434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.010 [2024-06-09 20:52:04.978615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=0x1 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=copy 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- 
accel/accel.sh@21 -- # val= 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=software 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@23 -- # accel_module=software 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=32 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=32 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.010 20:52:05 -- accel/accel.sh@21 -- # val=1 00:09:37.010 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.010 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.267 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.267 20:52:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:37.267 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.267 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.267 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.267 20:52:05 -- accel/accel.sh@21 -- # val=Yes 00:09:37.267 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.267 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.267 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.267 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.268 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.268 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.268 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:37.268 20:52:05 -- accel/accel.sh@21 -- # val= 00:09:37.268 20:52:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.268 20:52:05 -- accel/accel.sh@20 -- # IFS=: 00:09:37.268 20:52:05 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@21 -- # val= 00:09:39.170 20:52:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.170 20:52:06 -- accel/accel.sh@20 -- # IFS=: 00:09:39.170 20:52:06 -- 
accel/accel.sh@20 -- # read -r var val 00:09:39.170 20:52:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:39.170 20:52:06 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:39.170 20:52:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.170 00:09:39.170 real 0m4.756s 00:09:39.170 user 0m4.258s 00:09:39.170 sys 0m0.342s 00:09:39.170 20:52:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.170 20:52:06 -- common/autotest_common.sh@10 -- # set +x 00:09:39.170 ************************************ 00:09:39.170 END TEST accel_copy 00:09:39.170 ************************************ 00:09:39.170 20:52:06 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.170 20:52:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:39.170 20:52:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:39.170 20:52:06 -- common/autotest_common.sh@10 -- # set +x 00:09:39.170 ************************************ 00:09:39.170 START TEST accel_fill 00:09:39.170 ************************************ 00:09:39.170 20:52:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.170 20:52:07 -- accel/accel.sh@16 -- # local accel_opc 00:09:39.170 20:52:07 -- accel/accel.sh@17 -- # local accel_module 00:09:39.170 20:52:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.170 20:52:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.170 20:52:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.170 20:52:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.170 20:52:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.170 20:52:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.170 20:52:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.170 20:52:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.170 20:52:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.170 20:52:07 -- accel/accel.sh@42 -- # jq -r . 00:09:39.170 [2024-06-09 20:52:07.057705] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:39.170 [2024-06-09 20:52:07.057904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106052 ] 00:09:39.170 [2024-06-09 20:52:07.228556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.429 [2024-06-09 20:52:07.422467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.332 20:52:09 -- accel/accel.sh@18 -- # out=' 00:09:41.332 SPDK Configuration: 00:09:41.332 Core mask: 0x1 00:09:41.332 00:09:41.332 Accel Perf Configuration: 00:09:41.332 Workload Type: fill 00:09:41.332 Fill pattern: 0x80 00:09:41.332 Transfer size: 4096 bytes 00:09:41.332 Vector count 1 00:09:41.332 Module: software 00:09:41.332 Queue depth: 64 00:09:41.332 Allocate depth: 64 00:09:41.332 # threads/core: 1 00:09:41.332 Run time: 1 seconds 00:09:41.332 Verify: Yes 00:09:41.332 00:09:41.332 Running for 1 seconds... 
00:09:41.332 00:09:41.332 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:41.332 ------------------------------------------------------------------------------------ 00:09:41.332 0,0 447104/s 1746 MiB/s 0 0 00:09:41.332 ==================================================================================== 00:09:41.332 Total 447104/s 1746 MiB/s 0 0' 00:09:41.332 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.332 20:52:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:41.332 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.332 20:52:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:41.332 20:52:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.332 20:52:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.332 20:52:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.332 20:52:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.332 20:52:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.332 20:52:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.332 20:52:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.332 20:52:09 -- accel/accel.sh@42 -- # jq -r . 00:09:41.332 [2024-06-09 20:52:09.427091] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:41.332 [2024-06-09 20:52:09.427275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106087 ] 00:09:41.592 [2024-06-09 20:52:09.596959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.851 [2024-06-09 20:52:09.787259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=0x1 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=fill 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=0x80 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 
00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=software 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@23 -- # accel_module=software 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=64 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=64 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=1 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val=Yes 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:41.851 20:52:09 -- accel/accel.sh@21 -- # val= 00:09:41.851 20:52:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # IFS=: 00:09:41.851 20:52:09 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 
00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@21 -- # val= 00:09:43.755 20:52:11 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # IFS=: 00:09:43.755 20:52:11 -- accel/accel.sh@20 -- # read -r var val 00:09:43.755 20:52:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:43.755 20:52:11 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:43.755 20:52:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:43.755 00:09:43.755 real 0m4.674s 00:09:43.755 user 0m4.123s 00:09:43.755 sys 0m0.395s 00:09:43.755 20:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.755 20:52:11 -- common/autotest_common.sh@10 -- # set +x 00:09:43.755 ************************************ 00:09:43.755 END TEST accel_fill 00:09:43.755 ************************************ 00:09:43.755 20:52:11 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:43.755 20:52:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:43.755 20:52:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.755 20:52:11 -- common/autotest_common.sh@10 -- # set +x 00:09:43.755 ************************************ 00:09:43.755 START TEST accel_copy_crc32c 00:09:43.755 ************************************ 00:09:43.755 20:52:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:09:43.755 20:52:11 -- accel/accel.sh@16 -- # local accel_opc 00:09:43.755 20:52:11 -- accel/accel.sh@17 -- # local accel_module 00:09:43.755 20:52:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:43.755 20:52:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:43.755 20:52:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:43.755 20:52:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:43.755 20:52:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.755 20:52:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.755 20:52:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:43.755 20:52:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:43.755 20:52:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:43.755 20:52:11 -- accel/accel.sh@42 -- # jq -r . 00:09:43.755 [2024-06-09 20:52:11.768465] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:43.755 [2024-06-09 20:52:11.768780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106139 ] 00:09:43.755 [2024-06-09 20:52:11.918702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.014 [2024-06-09 20:52:12.075771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.936 20:52:13 -- accel/accel.sh@18 -- # out=' 00:09:45.936 SPDK Configuration: 00:09:45.936 Core mask: 0x1 00:09:45.936 00:09:45.936 Accel Perf Configuration: 00:09:45.936 Workload Type: copy_crc32c 00:09:45.936 CRC-32C seed: 0 00:09:45.936 Vector size: 4096 bytes 00:09:45.936 Transfer size: 4096 bytes 00:09:45.936 Vector count 1 00:09:45.936 Module: software 00:09:45.936 Queue depth: 32 00:09:45.936 Allocate depth: 32 00:09:45.936 # threads/core: 1 00:09:45.936 Run time: 1 seconds 00:09:45.936 Verify: Yes 00:09:45.936 00:09:45.936 Running for 1 seconds... 
00:09:45.936 00:09:45.936 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:45.936 ------------------------------------------------------------------------------------ 00:09:45.936 0,0 247360/s 966 MiB/s 0 0 00:09:45.936 ==================================================================================== 00:09:45.936 Total 247360/s 966 MiB/s 0 0' 00:09:45.936 20:52:13 -- accel/accel.sh@20 -- # IFS=: 00:09:45.936 20:52:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:45.936 20:52:13 -- accel/accel.sh@20 -- # read -r var val 00:09:45.936 20:52:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:45.936 20:52:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:45.936 20:52:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:45.936 20:52:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:45.936 20:52:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.936 20:52:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:45.936 20:52:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:45.936 20:52:13 -- accel/accel.sh@41 -- # local IFS=, 00:09:45.936 20:52:13 -- accel/accel.sh@42 -- # jq -r . 00:09:45.936 [2024-06-09 20:52:14.000678] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:45.936 [2024-06-09 20:52:14.000924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106176 ] 00:09:46.195 [2024-06-09 20:52:14.151880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.195 [2024-06-09 20:52:14.342651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val=0x1 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 20:52:14 -- accel/accel.sh@21 -- # val=0 00:09:46.453 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.453 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.453 
20:52:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val=software 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@23 -- # accel_module=software 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val=32 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val=32 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val=1 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val=Yes 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:46.454 20:52:14 -- accel/accel.sh@21 -- # val= 00:09:46.454 20:52:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # IFS=: 00:09:46.454 20:52:14 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 
00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@21 -- # val= 00:09:48.356 20:52:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # IFS=: 00:09:48.356 20:52:16 -- accel/accel.sh@20 -- # read -r var val 00:09:48.356 20:52:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:48.356 20:52:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:48.356 20:52:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:48.356 00:09:48.356 real 0m4.504s 00:09:48.356 user 0m3.998s 00:09:48.356 sys 0m0.342s 00:09:48.356 20:52:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.356 20:52:16 -- common/autotest_common.sh@10 -- # set +x 00:09:48.356 ************************************ 00:09:48.356 END TEST accel_copy_crc32c 00:09:48.356 ************************************ 00:09:48.356 20:52:16 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:48.356 20:52:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:48.356 20:52:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.356 20:52:16 -- common/autotest_common.sh@10 -- # set +x 00:09:48.356 ************************************ 00:09:48.356 START TEST accel_copy_crc32c_C2 00:09:48.356 ************************************ 00:09:48.356 20:52:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:48.356 20:52:16 -- accel/accel.sh@16 -- # local accel_opc 00:09:48.356 20:52:16 -- accel/accel.sh@17 -- # local accel_module 00:09:48.356 20:52:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:48.356 20:52:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:48.356 20:52:16 -- accel/accel.sh@12 -- # build_accel_config 00:09:48.356 20:52:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:48.356 20:52:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:48.356 20:52:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:48.356 20:52:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:48.356 20:52:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:48.356 20:52:16 -- accel/accel.sh@41 -- # local IFS=, 00:09:48.356 20:52:16 -- accel/accel.sh@42 -- # jq -r . 00:09:48.356 [2024-06-09 20:52:16.329990] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:48.356 [2024-06-09 20:52:16.330180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106221 ] 00:09:48.356 [2024-06-09 20:52:16.497720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.615 [2024-06-09 20:52:16.685326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.524 20:52:18 -- accel/accel.sh@18 -- # out=' 00:09:50.524 SPDK Configuration: 00:09:50.524 Core mask: 0x1 00:09:50.524 00:09:50.524 Accel Perf Configuration: 00:09:50.524 Workload Type: copy_crc32c 00:09:50.524 CRC-32C seed: 0 00:09:50.524 Vector size: 4096 bytes 00:09:50.524 Transfer size: 8192 bytes 00:09:50.524 Vector count 2 00:09:50.524 Module: software 00:09:50.524 Queue depth: 32 00:09:50.524 Allocate depth: 32 00:09:50.524 # threads/core: 1 00:09:50.524 Run time: 1 seconds 00:09:50.524 Verify: Yes 00:09:50.524 00:09:50.524 Running for 1 seconds... 00:09:50.524 00:09:50.524 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:50.524 ------------------------------------------------------------------------------------ 00:09:50.524 0,0 180352/s 1409 MiB/s 0 0 00:09:50.524 ==================================================================================== 00:09:50.524 Total 180352/s 704 MiB/s 0 0' 00:09:50.524 20:52:18 -- accel/accel.sh@20 -- # IFS=: 00:09:50.524 20:52:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:50.524 20:52:18 -- accel/accel.sh@20 -- # read -r var val 00:09:50.524 20:52:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:50.524 20:52:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:50.524 20:52:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:50.524 20:52:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.524 20:52:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.524 20:52:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:50.524 20:52:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:50.524 20:52:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:50.524 20:52:18 -- accel/accel.sh@42 -- # jq -r . 00:09:50.524 [2024-06-09 20:52:18.631847] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
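A note on the copy_crc32c -C 2 results above: the per-core row and the Total row disagree (1409 MiB/s vs 704 MiB/s) even though both report 180352 transfers/s. The split is consistent with the per-core figure using the 8192-byte transfer size and the total using the 4096-byte vector size — a quick check, with the numbers copied from the table above:

  python3 -c 'print(180352 * 8192 / 2**20)'   # 1409.0 MiB/s (8192-byte transfers)
  python3 -c 'print(180352 * 4096 / 2**20)'   # 704.5 MiB/s (4096-byte vectors)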
00:09:50.524 [2024-06-09 20:52:18.632040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106256 ] 00:09:50.782 [2024-06-09 20:52:18.797267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.041 [2024-06-09 20:52:18.981512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=0x1 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=0 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=software 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@23 -- # accel_module=software 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=32 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=32 
00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=1 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val=Yes 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:51.041 20:52:19 -- accel/accel.sh@21 -- # val= 00:09:51.041 20:52:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # IFS=: 00:09:51.041 20:52:19 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@21 -- # val= 00:09:52.943 20:52:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # IFS=: 00:09:52.943 20:52:20 -- accel/accel.sh@20 -- # read -r var val 00:09:52.943 20:52:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:52.943 20:52:20 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:52.943 20:52:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:52.943 00:09:52.943 real 0m4.610s 00:09:52.943 user 0m4.111s 00:09:52.943 sys 0m0.335s 00:09:52.943 20:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.943 20:52:20 -- common/autotest_common.sh@10 -- # set +x 00:09:52.943 ************************************ 00:09:52.943 END TEST accel_copy_crc32c_C2 00:09:52.943 ************************************ 00:09:52.943 20:52:20 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:52.943 20:52:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:09:52.943 20:52:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.943 20:52:20 -- common/autotest_common.sh@10 -- # set +x 00:09:52.943 ************************************ 00:09:52.943 START TEST accel_dualcast 00:09:52.943 ************************************ 00:09:52.943 20:52:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:09:52.943 20:52:20 -- accel/accel.sh@16 -- # local accel_opc 00:09:52.943 20:52:20 -- accel/accel.sh@17 -- # local accel_module 00:09:52.943 20:52:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:52.943 20:52:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:52.943 20:52:20 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.943 20:52:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.943 20:52:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.943 20:52:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.943 20:52:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.943 20:52:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.943 20:52:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.943 20:52:20 -- accel/accel.sh@42 -- # jq -r . 00:09:52.943 [2024-06-09 20:52:20.994961] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:52.943 [2024-06-09 20:52:20.995299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106313 ] 00:09:53.201 [2024-06-09 20:52:21.150133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.201 [2024-06-09 20:52:21.309847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.103 20:52:23 -- accel/accel.sh@18 -- # out=' 00:09:55.103 SPDK Configuration: 00:09:55.104 Core mask: 0x1 00:09:55.104 00:09:55.104 Accel Perf Configuration: 00:09:55.104 Workload Type: dualcast 00:09:55.104 Transfer size: 4096 bytes 00:09:55.104 Vector count 1 00:09:55.104 Module: software 00:09:55.104 Queue depth: 32 00:09:55.104 Allocate depth: 32 00:09:55.104 # threads/core: 1 00:09:55.104 Run time: 1 seconds 00:09:55.104 Verify: Yes 00:09:55.104 00:09:55.104 Running for 1 seconds... 00:09:55.104 00:09:55.104 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:55.104 ------------------------------------------------------------------------------------ 00:09:55.104 0,0 332640/s 1299 MiB/s 0 0 00:09:55.104 ==================================================================================== 00:09:55.104 Total 332640/s 1299 MiB/s 0 0' 00:09:55.104 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.104 20:52:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:55.104 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.104 20:52:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:55.104 20:52:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.104 20:52:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.104 20:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.104 20:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.104 20:52:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.104 20:52:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.104 20:52:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.104 20:52:23 -- accel/accel.sh@42 -- # jq -r . 
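The dualcast test above is launched through run_test/accel_test, but the underlying command line is visible in the trace. A minimal sketch for reproducing a run by hand, assuming the same checkout path as this job and dropping the JSON config that the harness feeds through -c /dev/fd/62:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
  # -t 1 = run for 1 second, -w dualcast = workload type, -y = verify results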
00:09:55.104 [2024-06-09 20:52:23.253871] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:55.104 [2024-06-09 20:52:23.254159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106348 ] 00:09:55.362 [2024-06-09 20:52:23.431359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.620 [2024-06-09 20:52:23.676435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val=0x1 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val=dualcast 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.878 20:52:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:55.878 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.878 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val=software 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@23 -- # accel_module=software 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val=32 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val=32 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val=1 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 
20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val=Yes 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:55.879 20:52:23 -- accel/accel.sh@21 -- # val= 00:09:55.879 20:52:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # IFS=: 00:09:55.879 20:52:23 -- accel/accel.sh@20 -- # read -r var val 00:09:57.782 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.782 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.782 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.782 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.782 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.782 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.782 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.782 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.782 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.783 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.783 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.783 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.783 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.783 20:52:25 -- accel/accel.sh@21 -- # val= 00:09:57.783 20:52:25 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.783 20:52:25 -- accel/accel.sh@20 -- # IFS=: 00:09:57.783 20:52:25 -- accel/accel.sh@20 -- # read -r var val 00:09:57.783 20:52:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:57.783 20:52:25 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:09:57.783 20:52:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:57.783 00:09:57.783 real 0m4.625s 00:09:57.783 user 0m4.103s 00:09:57.783 sys 0m0.378s 00:09:57.783 20:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.783 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:09:57.783 ************************************ 00:09:57.783 END TEST accel_dualcast 00:09:57.783 ************************************ 00:09:57.783 20:52:25 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:57.783 20:52:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:57.783 20:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.783 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:09:57.783 ************************************ 00:09:57.783 START TEST accel_compare 00:09:57.783 ************************************ 00:09:57.783 20:52:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:09:57.783 
20:52:25 -- accel/accel.sh@16 -- # local accel_opc 00:09:57.783 20:52:25 -- accel/accel.sh@17 -- # local accel_module 00:09:57.783 20:52:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:09:57.783 20:52:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:57.783 20:52:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.783 20:52:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.783 20:52:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.783 20:52:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.783 20:52:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.783 20:52:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.783 20:52:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.783 20:52:25 -- accel/accel.sh@42 -- # jq -r . 00:09:57.783 [2024-06-09 20:52:25.672470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:57.783 [2024-06-09 20:52:25.672662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106395 ] 00:09:57.783 [2024-06-09 20:52:25.839586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.041 [2024-06-09 20:52:26.013118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.942 20:52:27 -- accel/accel.sh@18 -- # out=' 00:09:59.942 SPDK Configuration: 00:09:59.942 Core mask: 0x1 00:09:59.942 00:09:59.942 Accel Perf Configuration: 00:09:59.942 Workload Type: compare 00:09:59.942 Transfer size: 4096 bytes 00:09:59.942 Vector count 1 00:09:59.942 Module: software 00:09:59.942 Queue depth: 32 00:09:59.942 Allocate depth: 32 00:09:59.942 # threads/core: 1 00:09:59.942 Run time: 1 seconds 00:09:59.942 Verify: Yes 00:09:59.942 00:09:59.942 Running for 1 seconds... 00:09:59.942 00:09:59.942 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.942 ------------------------------------------------------------------------------------ 00:09:59.942 0,0 463776/s 1811 MiB/s 0 0 00:09:59.942 ==================================================================================== 00:09:59.942 Total 463776/s 1811 MiB/s 0 0' 00:09:59.942 20:52:27 -- accel/accel.sh@20 -- # IFS=: 00:09:59.942 20:52:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:59.942 20:52:27 -- accel/accel.sh@20 -- # read -r var val 00:09:59.942 20:52:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:59.942 20:52:27 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.942 20:52:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.942 20:52:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.942 20:52:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.942 20:52:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.942 20:52:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.942 20:52:27 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.942 20:52:27 -- accel/accel.sh@42 -- # jq -r . 00:09:59.942 [2024-06-09 20:52:27.952091] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
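The long runs of IFS=: / read -r var val / case "$var" lines filling this log are accel.sh splitting the "SPDK Configuration:" block that accel_perf prints — each "Key: value" line is read with the colon as field separator so the harness can pick out the opcode and module in use. A paraphrased sketch of that reader, not the verbatim script (the case patterns are illustrative; only the variable names come from the trace):

  while IFS=: read -r var val; do
    case "$var" in
      *opc*)    accel_opc=$val ;;      # e.g. "compare"
      *module*) accel_module=$val ;;   # e.g. "software"
    esac
  done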
00:09:59.942 [2024-06-09 20:52:27.952303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106432 ] 00:10:00.200 [2024-06-09 20:52:28.120117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.200 [2024-06-09 20:52:28.307941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=0x1 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=compare 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=software 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@23 -- # accel_module=software 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=32 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=32 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=1 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val=Yes 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:00.514 20:52:28 -- accel/accel.sh@21 -- # val= 00:10:00.514 20:52:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # IFS=: 00:10:00.514 20:52:28 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@21 -- # val= 00:10:02.418 20:52:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # IFS=: 00:10:02.418 20:52:30 -- accel/accel.sh@20 -- # read -r var val 00:10:02.418 20:52:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:02.419 20:52:30 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:02.419 20:52:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.419 00:10:02.419 real 0m4.582s 00:10:02.419 user 0m4.061s 00:10:02.419 sys 0m0.350s 00:10:02.419 20:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.419 20:52:30 -- common/autotest_common.sh@10 -- # set +x 00:10:02.419 ************************************ 00:10:02.419 END TEST accel_compare 00:10:02.419 ************************************ 00:10:02.419 20:52:30 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:02.419 20:52:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:02.419 20:52:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.419 20:52:30 -- common/autotest_common.sh@10 -- # set +x 00:10:02.419 ************************************ 00:10:02.419 START TEST accel_xor 00:10:02.419 ************************************ 00:10:02.419 20:52:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:02.419 20:52:30 -- accel/accel.sh@16 -- # local accel_opc 00:10:02.419 20:52:30 -- accel/accel.sh@17 -- # local accel_module 00:10:02.419 
20:52:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:02.419 20:52:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:02.419 20:52:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.419 20:52:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.419 20:52:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.419 20:52:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.419 20:52:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.419 20:52:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.419 20:52:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.419 20:52:30 -- accel/accel.sh@42 -- # jq -r . 00:10:02.419 [2024-06-09 20:52:30.316510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:02.419 [2024-06-09 20:52:30.316699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106482 ] 00:10:02.419 [2024-06-09 20:52:30.486417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.679 [2024-06-09 20:52:30.661352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.585 20:52:32 -- accel/accel.sh@18 -- # out=' 00:10:04.585 SPDK Configuration: 00:10:04.585 Core mask: 0x1 00:10:04.585 00:10:04.585 Accel Perf Configuration: 00:10:04.585 Workload Type: xor 00:10:04.585 Source buffers: 2 00:10:04.585 Transfer size: 4096 bytes 00:10:04.585 Vector count 1 00:10:04.585 Module: software 00:10:04.585 Queue depth: 32 00:10:04.585 Allocate depth: 32 00:10:04.585 # threads/core: 1 00:10:04.585 Run time: 1 seconds 00:10:04.585 Verify: Yes 00:10:04.585 00:10:04.585 Running for 1 seconds... 00:10:04.585 00:10:04.585 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:04.585 ------------------------------------------------------------------------------------ 00:10:04.585 0,0 255584/s 998 MiB/s 0 0 00:10:04.585 ==================================================================================== 00:10:04.585 Total 255584/s 998 MiB/s 0 0' 00:10:04.585 20:52:32 -- accel/accel.sh@20 -- # IFS=: 00:10:04.585 20:52:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:04.585 20:52:32 -- accel/accel.sh@20 -- # read -r var val 00:10:04.585 20:52:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:04.585 20:52:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.585 20:52:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.585 20:52:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.585 20:52:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.585 20:52:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.585 20:52:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.585 20:52:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.585 20:52:32 -- accel/accel.sh@42 -- # jq -r . 00:10:04.585 [2024-06-09 20:52:32.585369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
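The xor run above uses the default two source buffers ("Source buffers: 2"); the next test repeats the workload with three via -x 3. Both variants can be tried by hand with the command line shown in the trace (sketch; same path assumption as before):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # 2 source buffers (default)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 source buffers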
00:10:04.585 [2024-06-09 20:52:32.585600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106517 ] 00:10:04.585 [2024-06-09 20:52:32.752237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.844 [2024-06-09 20:52:32.926922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.103 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.103 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.103 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.103 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.103 20:52:33 -- accel/accel.sh@21 -- # val=0x1 00:10:05.103 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.103 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.103 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.103 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.103 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.103 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=xor 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=2 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=software 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=32 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=32 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=1 00:10:05.104 20:52:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val=Yes 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.104 20:52:33 -- accel/accel.sh@21 -- # val= 00:10:05.104 20:52:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # IFS=: 00:10:05.104 20:52:33 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@21 -- # val= 00:10:07.009 20:52:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # IFS=: 00:10:07.009 20:52:34 -- accel/accel.sh@20 -- # read -r var val 00:10:07.009 20:52:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:07.009 20:52:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:07.009 20:52:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:07.009 00:10:07.009 real 0m4.551s 00:10:07.009 user 0m3.982s 00:10:07.009 sys 0m0.410s 00:10:07.009 20:52:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.009 20:52:34 -- common/autotest_common.sh@10 -- # set +x 00:10:07.009 ************************************ 00:10:07.009 END TEST accel_xor 00:10:07.009 ************************************ 00:10:07.009 20:52:34 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:07.009 20:52:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:07.009 20:52:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.009 20:52:34 -- common/autotest_common.sh@10 -- # set +x 00:10:07.009 ************************************ 00:10:07.009 START TEST accel_xor 00:10:07.009 ************************************ 00:10:07.009 
20:52:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:07.009 20:52:34 -- accel/accel.sh@16 -- # local accel_opc 00:10:07.009 20:52:34 -- accel/accel.sh@17 -- # local accel_module 00:10:07.009 20:52:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:07.009 20:52:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:07.009 20:52:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.009 20:52:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.009 20:52:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.009 20:52:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.009 20:52:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.009 20:52:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.009 20:52:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.009 20:52:34 -- accel/accel.sh@42 -- # jq -r . 00:10:07.009 [2024-06-09 20:52:34.927856] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:07.009 [2024-06-09 20:52:34.928124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106569 ] 00:10:07.009 [2024-06-09 20:52:35.116975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.268 [2024-06-09 20:52:35.292947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.171 20:52:37 -- accel/accel.sh@18 -- # out=' 00:10:09.171 SPDK Configuration: 00:10:09.171 Core mask: 0x1 00:10:09.171 00:10:09.171 Accel Perf Configuration: 00:10:09.171 Workload Type: xor 00:10:09.171 Source buffers: 3 00:10:09.171 Transfer size: 4096 bytes 00:10:09.171 Vector count 1 00:10:09.171 Module: software 00:10:09.171 Queue depth: 32 00:10:09.171 Allocate depth: 32 00:10:09.171 # threads/core: 1 00:10:09.171 Run time: 1 seconds 00:10:09.171 Verify: Yes 00:10:09.171 00:10:09.171 Running for 1 seconds... 00:10:09.171 00:10:09.171 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:09.171 ------------------------------------------------------------------------------------ 00:10:09.171 0,0 237056/s 926 MiB/s 0 0 00:10:09.171 ==================================================================================== 00:10:09.171 Total 237056/s 926 MiB/s 0 0' 00:10:09.171 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.171 20:52:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:09.171 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.171 20:52:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:09.171 20:52:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.171 20:52:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.171 20:52:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.171 20:52:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.171 20:52:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.171 20:52:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.171 20:52:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.171 20:52:37 -- accel/accel.sh@42 -- # jq -r . 00:10:09.171 [2024-06-09 20:52:37.253102] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
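Setting the two xor configurations side by side gives a rough sense of what the third source buffer costs; with the transfer rates copied from the two result tables (a back-of-the-envelope check, nothing more):

  python3 -c 'print(237056 / 255584)'         # ≈ 0.93: ~7% fewer transfers/s with -x 3
  python3 -c 'print(237056 * 4096 / 2**20)'   # 926.0 MiB/s, matching the table above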
00:10:09.172 [2024-06-09 20:52:37.253316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106609 ] 00:10:09.430 [2024-06-09 20:52:37.420011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.689 [2024-06-09 20:52:37.610177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=0x1 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=xor 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=3 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=software 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@23 -- # accel_module=software 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=32 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=32 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=1 00:10:09.689 20:52:37 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val=Yes 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:09.689 20:52:37 -- accel/accel.sh@21 -- # val= 00:10:09.689 20:52:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # IFS=: 00:10:09.689 20:52:37 -- accel/accel.sh@20 -- # read -r var val 00:10:11.591 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.591 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.591 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.592 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.592 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.592 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.592 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@21 -- # val= 00:10:11.592 20:52:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # IFS=: 00:10:11.592 20:52:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.592 20:52:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:11.592 20:52:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:11.592 20:52:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.592 00:10:11.592 real 0m4.664s 00:10:11.592 user 0m4.102s 00:10:11.592 sys 0m0.383s 00:10:11.592 20:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.592 20:52:39 -- common/autotest_common.sh@10 -- # set +x 00:10:11.592 ************************************ 00:10:11.592 END TEST accel_xor 00:10:11.592 ************************************ 00:10:11.592 20:52:39 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:11.592 20:52:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:11.592 20:52:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.592 20:52:39 -- common/autotest_common.sh@10 -- # set +x 00:10:11.592 ************************************ 00:10:11.592 START TEST accel_dif_verify 00:10:11.592 ************************************ 
00:10:11.592 20:52:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:11.592 20:52:39 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.592 20:52:39 -- accel/accel.sh@17 -- # local accel_module 00:10:11.592 20:52:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:11.592 20:52:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:11.592 20:52:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.592 20:52:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.592 20:52:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.592 20:52:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.592 20:52:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.592 20:52:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.592 20:52:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.592 20:52:39 -- accel/accel.sh@42 -- # jq -r . 00:10:11.592 [2024-06-09 20:52:39.636507] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:11.592 [2024-06-09 20:52:39.636725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106654 ] 00:10:11.851 [2024-06-09 20:52:39.805100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.851 [2024-06-09 20:52:39.984336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.754 20:52:41 -- accel/accel.sh@18 -- # out=' 00:10:13.754 SPDK Configuration: 00:10:13.754 Core mask: 0x1 00:10:13.754 00:10:13.754 Accel Perf Configuration: 00:10:13.754 Workload Type: dif_verify 00:10:13.754 Vector size: 4096 bytes 00:10:13.754 Transfer size: 4096 bytes 00:10:13.754 Block size: 512 bytes 00:10:13.754 Metadata size: 8 bytes 00:10:13.754 Vector count 1 00:10:13.754 Module: software 00:10:13.754 Queue depth: 32 00:10:13.754 Allocate depth: 32 00:10:13.754 # threads/core: 1 00:10:13.754 Run time: 1 seconds 00:10:13.754 Verify: No 00:10:13.754 00:10:13.754 Running for 1 seconds... 00:10:13.754 00:10:13.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:13.754 ------------------------------------------------------------------------------------ 00:10:13.754 0,0 114048/s 452 MiB/s 0 0 00:10:13.754 ==================================================================================== 00:10:13.754 Total 114048/s 445 MiB/s 0 0' 00:10:13.754 20:52:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:13.754 20:52:41 -- accel/accel.sh@20 -- # IFS=: 00:10:13.754 20:52:41 -- accel/accel.sh@20 -- # read -r var val 00:10:13.754 20:52:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:13.754 20:52:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.012 20:52:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.012 20:52:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.012 20:52:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.012 20:52:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.012 20:52:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.012 20:52:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.012 20:52:41 -- accel/accel.sh@42 -- # jq -r . 00:10:14.012 [2024-06-09 20:52:41.970529] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
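The dif_verify table above reports two bandwidths for the same 114048 transfers/s (452 vs 445 MiB/s). The gap is consistent with the per-core row counting the 8-byte DIF metadata carried per 512-byte block while the Total row counts payload only; checking against the sizes from the configuration block (metadata size 8 bytes, block size 512 bytes):

  python3 -c 'print(114048 * (4096 + 4096 // 512 * 8) / 2**20)'   # 452.4 MiB/s, payload + DIF metadata
  python3 -c 'print(114048 * 4096 / 2**20)'                       # 445.5 MiB/s, payload only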
00:10:14.012 [2024-06-09 20:52:41.970789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106689 ] 00:10:14.012 [2024-06-09 20:52:42.140171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.270 [2024-06-09 20:52:42.353189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=0x1 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=dif_verify 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=software 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- 
accel/accel.sh@21 -- # val=32 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=32 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=1 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val=No 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:14.529 20:52:42 -- accel/accel.sh@21 -- # val= 00:10:14.529 20:52:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # IFS=: 00:10:14.529 20:52:42 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 20:52:44 -- accel/accel.sh@21 -- # val= 00:10:16.432 20:52:44 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # IFS=: 00:10:16.432 20:52:44 -- accel/accel.sh@20 -- # read -r var val 00:10:16.432 ************************************ 00:10:16.432 END TEST accel_dif_verify 00:10:16.432 ************************************ 00:10:16.432 20:52:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.432 20:52:44 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:16.432 20:52:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.432 00:10:16.432 real 0m4.738s 00:10:16.432 user 0m4.197s 00:10:16.432 sys 0m0.378s 00:10:16.432 20:52:44 -- common/autotest_common.sh@1105 -- # 
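The option read loops condensed above are accel.sh walking the key/value lines echoed back by accel_perf, splitting each on ':' and latching the two fields the @28 checks later test, accel_opc and accel_module. A simplified reconstruction of that pattern, not the verbatim accel.sh source; the xargs-based whitespace trimming is an assumption:

  # Parse "Key: value" lines the way the IFS=: / read -r var val trace implies.
  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=$(echo "$val" | xargs) ;;    # e.g. dif_verify
          *Module*)          accel_module=$(echo "$val" | xargs) ;; # e.g. software
      esac
  done <<< "$out"
  [[ -n $accel_module && -n $accel_opc ]]   # the checks seen after each run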
accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
common/autotest_common.sh@1083 -- # xtrace_disable
************************************
START TEST accel_dif_generate
************************************
common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate
accel/accel.sh@16 -- # local accel_opc
accel/accel.sh@17 -- # local accel_module
accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:44.423270] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:44.423579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106743 ]
[2024-06-09 20:52:44.578831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:44.766395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
accel/accel.sh@18 -- # out='
SPDK Configuration:
Core mask: 0x1

Accel Perf Configuration:
Workload Type: dif_generate
Vector size: 4096 bytes
Transfer size: 4096 bytes
Block size: 512 bytes
Metadata size: 8 bytes
Vector count 1
Module: software
Queue depth: 32
Allocate depth: 32
# threads/core: 1
Run time: 1 seconds
Verify: No

Running for 1 seconds...

Core,Thread Transfers Bandwidth Failed Miscompares
------------------------------------------------------------------------------------
0,0 134528/s 533 MiB/s 0 0
====================================================================================
Total 134528/s 525 MiB/s 0 0'
accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:46.747576] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:46.748003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106778 ]
[2024-06-09 20:52:46.917233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:47.106422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace elided: option replay loop setting val= 0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No]
************************************
END TEST accel_dif_generate
************************************
accel/accel.sh@28 -- # [[ -n software ]]
accel/accel.sh@28 -- # [[ -n dif_generate ]]
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m4.708s
user 0m4.165s
sys 0m0.375s
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
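The dif_generate configuration above implies how much protection metadata each transfer carries: a 4096-byte transfer at a 512-byte block size is eight blocks, and each block gets an 8-byte DIF tuple. A quick check with shell arithmetic:

  # Blocks per transfer and DIF bytes per transfer, from the config above.
  echo $((4096 / 512))       # 8 blocks per 4096-byte transfer
  echo $((4096 / 512 * 8))   # 64 bytes of DIF metadata generated per transfer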
accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
common/autotest_common.sh@1083 -- # xtrace_disable
************************************
START TEST accel_dif_generate_copy
************************************
common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy
accel/accel.sh@16 -- # local accel_opc
accel/accel.sh@17 -- # local accel_module
accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:49.182699] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:49.183035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106828 ]
[2024-06-09 20:52:49.334558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:49.527465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
accel/accel.sh@18 -- # out='
SPDK Configuration:
Core mask: 0x1

Accel Perf Configuration:
Workload Type: dif_generate_copy
Vector size: 4096 bytes
Transfer size: 4096 bytes
Vector count 1
Module: software
Queue depth: 32
Allocate depth: 32
# threads/core: 1
Run time: 1 seconds
Verify: No

Running for 1 seconds...

Core,Thread Transfers Bandwidth Failed Miscompares
------------------------------------------------------------------------------------
0,0 94752/s 375 MiB/s 0 0
====================================================================================
Total 94752/s 370 MiB/s 0 0'
accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:51.564821] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:51.565343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106870 ]
[2024-06-09 20:52:51.733021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:51.934442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace elided: option replay loop setting val= 0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No]
accel/accel.sh@28 -- # [[ -n software ]]
accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m4.748s
user 0m4.239s
sys 0m0.342s
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
END TEST accel_dif_generate_copy
************************************
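The Total line above can be sanity-checked from its own columns: 94752 transfers per second at 4096 bytes each. One awk line reproduces the reported bandwidth:

  # 94752 transfers/s x 4096 bytes, converted to MiB/s (1 MiB = 1048576 bytes).
  awk 'BEGIN { printf "%.0f MiB/s\n", 94752 * 4096 / 1048576 }'   # prints 370 MiB/s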
accel/accel.sh@107 -- # [[ y == y ]]
accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
common/autotest_common.sh@1083 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_comp
************************************
common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
accel/accel.sh@16 -- # local accel_opc
accel/accel.sh@17 -- # local accel_module
accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:53.990757] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:53.991166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106915 ]
[2024-06-09 20:52:54.153962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:54.345627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
accel/accel.sh@18 -- # out='Preparing input file...
SPDK Configuration:
Core mask: 0x1

Accel Perf Configuration:
Workload Type: compress
Transfer size: 4096 bytes
Vector count 1
Module: software
File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
Queue depth: 32
Allocate depth: 32
# threads/core: 1
Run time: 1 seconds
Verify: No

Running for 1 seconds...

Core,Thread Transfers Bandwidth Failed Miscompares
------------------------------------------------------------------------------------
0,0 51648/s 215 MiB/s 0 0
====================================================================================
Total 51648/s 201 MiB/s 0 0'
accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:56.300468] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:56.300849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106953 ]
[2024-06-09 20:52:56.461730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:56.659581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace elided: option replay loop setting val= 0x1, compress, '4096 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', No]
************************************
END TEST accel_comp
************************************
accel/accel.sh@28 -- # [[ -n software ]]
accel/accel.sh@28 -- # [[ -n compress ]]
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m4.598s
user 0m4.046s
sys 0m0.385s
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
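Unlike the dif_* workloads, compress reads a real corpus, passed with -l (File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib in the banner above). A standalone re-run, under the same empty-config assumption as the earlier sketch:

  # 1-second software compress pass over the same input file.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -c <(echo '{}') -t 1 -w compress \
      -l "$SPDK/test/accel/bib"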
accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
common/autotest_common.sh@1083 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_decomp
************************************
common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
accel/accel.sh@16 -- # local accel_opc
accel/accel.sh@17 -- # local accel_module
accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:52:58.637988] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:52:58.638375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107004 ]
[2024-06-09 20:52:58.788782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:52:58.977358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
accel/accel.sh@18 -- # out='Preparing input file...
SPDK Configuration:
Core mask: 0x1

Accel Perf Configuration:
Workload Type: decompress
Transfer size: 4096 bytes
Vector count 1
Module: software
File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
Queue depth: 32
Allocate depth: 32
# threads/core: 1
Run time: 1 seconds
Verify: Yes

Running for 1 seconds...

Core,Thread Transfers Bandwidth Failed Miscompares
------------------------------------------------------------------------------------
0,0 71200/s 131 MiB/s 0 0
====================================================================================
Total 71200/s 278 MiB/s 0 0'
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:53:00.925824] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:53:00.926368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107039 ]
[2024-06-09 20:53:01.093154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:53:01.287824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace elided: option replay loop setting val= 0x1, decompress, '4096 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
accel/accel.sh@28 -- # [[ -n software ]]
accel/accel.sh@28 -- # [[ -n decompress ]]
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
real 0m4.614s
user 0m4.063s
sys 0m0.383s
************************************
END TEST accel_decomp
************************************
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
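The decompress run flips Verify to Yes, which tracks the extra -y flag on its command line; the same flag works standalone (empty-config assumption as before):

  # Decompress the bib corpus and verify the output (-y gives "Verify: Yes").
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y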
accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
common/autotest_common.sh@1083 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
START TEST accel_decmop_full
************************************
common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
accel/accel.sh@16 -- # local accel_opc
accel/accel.sh@17 -- # local accel_module
accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:53:03.309301] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:53:03.310307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107089 ]
[2024-06-09 20:53:03.476561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:53:03.658697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
accel/accel.sh@18 -- # out='Preparing input file...
SPDK Configuration:
Core mask: 0x1

Accel Perf Configuration:
Workload Type: decompress
Transfer size: 111250 bytes
Vector count 1
Module: software
File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
Queue depth: 32
Allocate depth: 32
# threads/core: 1
Run time: 1 seconds
Verify: Yes

Running for 1 seconds...

Core,Thread Transfers Bandwidth Failed Miscompares
------------------------------------------------------------------------------------
0,0 5280/s 218 MiB/s 0 0
====================================================================================
Total 5280/s 560 MiB/s 0 0'
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
accel/accel.sh@12 -- # build_accel_config
[xtrace elided: build_accel_config boilerplate]
[2024-06-09 20:53:05.632001] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
[2024-06-09 20:53:05.632477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107126 ]
[2024-06-09 20:53:05.804227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-06-09 20:53:05.992521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[xtrace elided: option replay loop setting val= 0x1, decompress, '111250 bytes', software, /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes]
accel/accel.sh@21 -- # val= 00:10:39.970 20:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # IFS=: 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # read -r var val 00:10:39.970 20:53:07 -- accel/accel.sh@21 -- # val= 00:10:39.970 20:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # IFS=: 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # read -r var val 00:10:39.970 20:53:07 -- accel/accel.sh@21 -- # val= 00:10:39.970 20:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # IFS=: 00:10:39.970 20:53:07 -- accel/accel.sh@20 -- # read -r var val 00:10:39.970 20:53:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:39.970 20:53:07 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:39.970 20:53:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.970 00:10:39.970 real 0m4.665s 00:10:39.970 user 0m4.133s 00:10:39.970 sys 0m0.361s 00:10:39.970 20:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.970 20:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:39.970 ************************************ 00:10:39.970 END TEST accel_decmop_full 00:10:39.970 ************************************ 00:10:39.970 20:53:07 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:39.970 20:53:07 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:39.970 20:53:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:39.970 20:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:39.970 ************************************ 00:10:39.970 START TEST accel_decomp_mcore 00:10:39.970 ************************************ 00:10:39.970 20:53:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:39.970 20:53:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.970 20:53:07 -- accel/accel.sh@17 -- # local accel_module 00:10:39.970 20:53:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:39.970 20:53:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:39.970 20:53:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.970 20:53:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.970 20:53:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.970 20:53:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.970 20:53:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.970 20:53:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.970 20:53:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.970 20:53:07 -- accel/accel.sh@42 -- # jq -r . 00:10:39.970 [2024-06-09 20:53:08.020486] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
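[Note on reading the harness output: the rows of asterisks are run_test's START/END banners, and the real/user/sys triplet just before each END banner is the shell's timing of the test function (presumably bash's time builtin inside run_test) — so accel_decmop_full above took about 4.7 s wall-clock, most of it the one-second accel_perf run plus app startup and teardown. The spelling "decmop" is how the test is actually named in the script, not a transcription error in this log.]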
00:10:39.970 [2024-06-09 20:53:08.020824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107178 ] 00:10:40.229 [2024-06-09 20:53:08.195803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.229 [2024-06-09 20:53:08.370108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.229 [2024-06-09 20:53:08.370284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.229 [2024-06-09 20:53:08.370435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.229 [2024-06-09 20:53:08.370646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.763 20:53:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:42.763 00:10:42.763 SPDK Configuration: 00:10:42.763 Core mask: 0xf 00:10:42.763 00:10:42.763 Accel Perf Configuration: 00:10:42.763 Workload Type: decompress 00:10:42.763 Transfer size: 4096 bytes 00:10:42.763 Vector count 1 00:10:42.763 Module: software 00:10:42.763 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:42.763 Queue depth: 32 00:10:42.763 Allocate depth: 32 00:10:42.763 # threads/core: 1 00:10:42.763 Run time: 1 seconds 00:10:42.763 Verify: Yes 00:10:42.763 00:10:42.763 Running for 1 seconds... 00:10:42.763 00:10:42.763 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.763 ------------------------------------------------------------------------------------ 00:10:42.763 0,0 49440/s 91 MiB/s 0 0 00:10:42.763 3,0 51488/s 94 MiB/s 0 0 00:10:42.763 2,0 52032/s 95 MiB/s 0 0 00:10:42.763 1,0 47936/s 88 MiB/s 0 0 00:10:42.763 ==================================================================================== 00:10:42.763 Total 200896/s 784 MiB/s 0 0' 00:10:42.763 20:53:10 -- accel/accel.sh@20 -- # IFS=: 00:10:42.763 20:53:10 -- accel/accel.sh@20 -- # read -r var val 00:10:42.763 20:53:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:42.763 20:53:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:42.763 20:53:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.763 20:53:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.763 20:53:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.763 20:53:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.763 20:53:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.763 20:53:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.763 20:53:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.764 20:53:10 -- accel/accel.sh@42 -- # jq -r . 00:10:42.764 [2024-06-09 20:53:10.484238] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
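[Note: the accel_decomp_mcore table above was produced with core mask 0xf, i.e. four reactors decompressing 4096-byte blocks in parallel — which is also why the test's summary further below reports user time (~14.4 s) roughly three times its real time (~5 s). The harness feeds accel_perf its JSON config over /dev/fd/62; a minimal standalone reproduction from a built SPDK tree (paths assumed, config descriptor omitted) would be roughly:

    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf
]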
00:10:42.764 [2024-06-09 20:53:10.484707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107216 ] 00:10:42.764 [2024-06-09 20:53:10.669014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.764 [2024-06-09 20:53:10.884149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.764 [2024-06-09 20:53:10.884301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.764 [2024-06-09 20:53:10.884436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.764 [2024-06-09 20:53:10.884769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=0xf 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=decompress 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=software 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 
00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=32 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=32 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=1 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val=Yes 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:43.023 20:53:11 -- accel/accel.sh@21 -- # val= 00:10:43.023 20:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # IFS=: 00:10:43.023 20:53:11 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- 
accel/accel.sh@20 -- # read -r var val 00:10:44.925 20:53:12 -- accel/accel.sh@21 -- # val= 00:10:44.925 20:53:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # IFS=: 00:10:44.925 20:53:12 -- accel/accel.sh@20 -- # read -r var val 00:10:44.925 ************************************ 00:10:44.925 END TEST accel_decomp_mcore 00:10:44.925 ************************************ 00:10:44.925 20:53:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:44.925 20:53:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:44.925 20:53:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.925 00:10:44.925 real 0m4.973s 00:10:44.925 user 0m14.420s 00:10:44.925 sys 0m0.488s 00:10:44.925 20:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.925 20:53:12 -- common/autotest_common.sh@10 -- # set +x 00:10:44.925 20:53:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.925 20:53:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:44.925 20:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.925 20:53:12 -- common/autotest_common.sh@10 -- # set +x 00:10:44.925 ************************************ 00:10:44.925 START TEST accel_decomp_full_mcore 00:10:44.925 ************************************ 00:10:44.925 20:53:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.925 20:53:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:44.925 20:53:13 -- accel/accel.sh@17 -- # local accel_module 00:10:44.925 20:53:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.925 20:53:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:44.925 20:53:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.925 20:53:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.925 20:53:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.925 20:53:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.925 20:53:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.925 20:53:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.925 20:53:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.925 20:53:13 -- accel/accel.sh@42 -- # jq -r . 00:10:44.925 [2024-06-09 20:53:13.035503] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:44.925 [2024-06-09 20:53:13.035664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107277 ] 00:10:45.183 [2024-06-09 20:53:13.209704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.442 [2024-06-09 20:53:13.417221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.442 [2024-06-09 20:53:13.417381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.442 [2024-06-09 20:53:13.418538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.442 [2024-06-09 20:53:13.418583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.975 20:53:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:47.975 00:10:47.975 SPDK Configuration: 00:10:47.975 Core mask: 0xf 00:10:47.975 00:10:47.975 Accel Perf Configuration: 00:10:47.975 Workload Type: decompress 00:10:47.975 Transfer size: 111250 bytes 00:10:47.975 Vector count 1 00:10:47.975 Module: software 00:10:47.975 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.975 Queue depth: 32 00:10:47.975 Allocate depth: 32 00:10:47.975 # threads/core: 1 00:10:47.975 Run time: 1 seconds 00:10:47.975 Verify: Yes 00:10:47.975 00:10:47.975 Running for 1 seconds... 00:10:47.975 00:10:47.975 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:47.975 ------------------------------------------------------------------------------------ 00:10:47.975 0,0 5152/s 212 MiB/s 0 0 00:10:47.975 3,0 4960/s 204 MiB/s 0 0 00:10:47.975 2,0 4832/s 199 MiB/s 0 0 00:10:47.975 1,0 4864/s 200 MiB/s 0 0 00:10:47.975 ==================================================================================== 00:10:47.975 Total 19808/s 2101 MiB/s 0 0' 00:10:47.975 20:53:15 -- accel/accel.sh@20 -- # IFS=: 00:10:47.975 20:53:15 -- accel/accel.sh@20 -- # read -r var val 00:10:47.975 20:53:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:47.975 20:53:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:47.975 20:53:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.975 20:53:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.975 20:53:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.975 20:53:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.975 20:53:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.975 20:53:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.975 20:53:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.975 20:53:15 -- accel/accel.sh@42 -- # jq -r . 00:10:47.975 [2024-06-09 20:53:15.601434] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
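[Note: adding -o 0 appears to switch the workload to full-block decompression — the configuration dump reports a 111250-byte transfer size instead of the 4096 bytes seen in the plain mcore run. The Total line is consistent with that output size: 19808 transfers/s x 111250 B / 2^20 ~= 2101 MiB/s. The per-core bandwidth column is evidently computed over a smaller unit (212 MiB/s at 5152/s implies roughly 43 KiB per transfer, presumably the compressed input), so the per-core and Total bandwidth figures are not directly comparable.]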
00:10:47.975 [2024-06-09 20:53:15.601672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107316 ] 00:10:47.975 [2024-06-09 20:53:15.791331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.975 [2024-06-09 20:53:15.992239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.975 [2024-06-09 20:53:15.992406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.975 [2024-06-09 20:53:15.993244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.975 [2024-06-09 20:53:15.993291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=0xf 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=decompress 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=software 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@23 -- # accel_module=software 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 
00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=32 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=32 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=1 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val=Yes 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:48.234 20:53:16 -- accel/accel.sh@21 -- # val= 00:10:48.234 20:53:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # IFS=: 00:10:48.234 20:53:16 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- 
accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:17 -- accel/accel.sh@21 -- # val= 00:10:50.138 20:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.138 20:53:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.138 20:53:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.138 20:53:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:50.138 20:53:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.138 00:10:50.138 real 0m5.008s 00:10:50.138 user 0m14.517s 00:10:50.138 sys 0m0.439s 00:10:50.138 20:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.138 ************************************ 00:10:50.138 END TEST accel_decomp_full_mcore 00:10:50.138 ************************************ 00:10:50.138 20:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:50.138 20:53:18 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:50.138 20:53:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:10:50.138 20:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:50.138 20:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:50.138 ************************************ 00:10:50.138 START TEST accel_decomp_mthread 00:10:50.138 ************************************ 00:10:50.138 20:53:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:50.138 20:53:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.138 20:53:18 -- accel/accel.sh@17 -- # local accel_module 00:10:50.138 20:53:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:50.138 20:53:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:50.138 20:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.138 20:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.138 20:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.138 20:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.138 20:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.138 20:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.138 20:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.138 20:53:18 -- accel/accel.sh@42 -- # jq -r . 00:10:50.138 [2024-06-09 20:53:18.101181] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:50.138 [2024-06-09 20:53:18.101316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107371 ] 00:10:50.138 [2024-06-09 20:53:18.257213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.397 [2024-06-09 20:53:18.432706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.334 20:53:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:52.334 00:10:52.334 SPDK Configuration: 00:10:52.334 Core mask: 0x1 00:10:52.334 00:10:52.334 Accel Perf Configuration: 00:10:52.334 Workload Type: decompress 00:10:52.334 Transfer size: 4096 bytes 00:10:52.334 Vector count 1 00:10:52.334 Module: software 00:10:52.334 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:52.334 Queue depth: 32 00:10:52.334 Allocate depth: 32 00:10:52.334 # threads/core: 2 00:10:52.334 Run time: 1 seconds 00:10:52.334 Verify: Yes 00:10:52.334 00:10:52.334 Running for 1 seconds... 00:10:52.334 00:10:52.334 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.334 ------------------------------------------------------------------------------------ 00:10:52.334 0,1 36704/s 67 MiB/s 0 0 00:10:52.334 0,0 36576/s 67 MiB/s 0 0 00:10:52.334 ==================================================================================== 00:10:52.334 Total 73280/s 286 MiB/s 0 0' 00:10:52.334 20:53:20 -- accel/accel.sh@20 -- # IFS=: 00:10:52.334 20:53:20 -- accel/accel.sh@20 -- # read -r var val 00:10:52.334 20:53:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:52.334 20:53:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:52.334 20:53:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.334 20:53:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.334 20:53:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.334 20:53:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.334 20:53:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.334 20:53:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.334 20:53:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.334 20:53:20 -- accel/accel.sh@42 -- # jq -r . 00:10:52.334 [2024-06-09 20:53:20.439340] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
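[Note: here -T 2 runs two worker threads on the single core in mask 0x1 — hence "# threads/core: 2" in the configuration and the two Core,Thread rows 0,0 and 0,1 above. The totals check out: 36704 + 36576 = 73280 transfers/s of 4096-byte blocks, and 73280 x 4096 / 2^20 ~= 286 MiB/s, matching the Total line. The same arithmetic holds for the -o 0 -T 2 run further below: 5440/s x 111250 B ~= 577 MiB/s.]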
00:10:52.334 [2024-06-09 20:53:20.439559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107406 ] 00:10:52.593 [2024-06-09 20:53:20.606600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.852 [2024-06-09 20:53:20.810147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=0x1 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=decompress 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=software 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=32 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- 
accel/accel.sh@21 -- # val=32 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=2 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val=Yes 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:52.852 20:53:21 -- accel/accel.sh@21 -- # val= 00:10:52.852 20:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # IFS=: 00:10:52.852 20:53:21 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@21 -- # val= 00:10:54.754 20:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # IFS=: 00:10:54.754 20:53:22 -- accel/accel.sh@20 -- # read -r var val 00:10:54.754 20:53:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.754 20:53:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:54.754 20:53:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.754 00:10:54.754 real 0m4.662s 00:10:54.754 user 0m4.125s 00:10:54.754 sys 0m0.379s 00:10:54.755 ************************************ 00:10:54.755 END TEST accel_decomp_mthread 00:10:54.755 ************************************ 00:10:54.755 20:53:22 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:54.755 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:54.755 20:53:22 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:54.755 20:53:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:54.755 20:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.755 20:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:54.755 ************************************ 00:10:54.755 START TEST accel_deomp_full_mthread 00:10:54.755 ************************************ 00:10:54.755 20:53:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:54.755 20:53:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.755 20:53:22 -- accel/accel.sh@17 -- # local accel_module 00:10:54.755 20:53:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:54.755 20:53:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:54.755 20:53:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.755 20:53:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.755 20:53:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.755 20:53:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.755 20:53:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.755 20:53:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.755 20:53:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.755 20:53:22 -- accel/accel.sh@42 -- # jq -r . 00:10:54.755 [2024-06-09 20:53:22.819282] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:54.755 [2024-06-09 20:53:22.819993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107457 ] 00:10:55.012 [2024-06-09 20:53:22.987423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.012 [2024-06-09 20:53:23.175579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.542 20:53:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:57.542 00:10:57.542 SPDK Configuration: 00:10:57.542 Core mask: 0x1 00:10:57.542 00:10:57.542 Accel Perf Configuration: 00:10:57.542 Workload Type: decompress 00:10:57.542 Transfer size: 111250 bytes 00:10:57.542 Vector count 1 00:10:57.542 Module: software 00:10:57.542 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.542 Queue depth: 32 00:10:57.542 Allocate depth: 32 00:10:57.542 # threads/core: 2 00:10:57.542 Run time: 1 seconds 00:10:57.542 Verify: Yes 00:10:57.542 00:10:57.542 Running for 1 seconds... 
00:10:57.542 00:10:57.542 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:57.542 ------------------------------------------------------------------------------------ 00:10:57.542 0,1 2752/s 113 MiB/s 0 0 00:10:57.542 0,0 2688/s 111 MiB/s 0 0 00:10:57.542 ==================================================================================== 00:10:57.542 Total 5440/s 577 MiB/s 0 0' 00:10:57.542 20:53:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.542 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.542 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.542 20:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.542 20:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.542 20:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.542 20:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.542 20:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.542 20:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.542 20:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.542 20:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.542 20:53:25 -- accel/accel.sh@42 -- # jq -r . 00:10:57.542 [2024-06-09 20:53:25.183126] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:57.542 [2024-06-09 20:53:25.183811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107500 ] 00:10:57.542 [2024-06-09 20:53:25.351917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.542 [2024-06-09 20:53:25.539871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=0x1 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=decompress 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=software 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@23 -- # accel_module=software 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=32 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=32 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=2 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:57.801 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.801 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.801 20:53:25 -- accel/accel.sh@21 -- # val=Yes 00:10:57.802 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.802 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.802 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:57.802 20:53:25 -- accel/accel.sh@21 -- # val= 00:10:57.802 20:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # IFS=: 00:10:57.802 20:53:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # 
read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@21 -- # val= 00:10:59.707 20:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # IFS=: 00:10:59.707 20:53:27 -- accel/accel.sh@20 -- # read -r var val 00:10:59.707 20:53:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:59.707 20:53:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:59.707 20:53:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:59.707 00:10:59.707 real 0m4.727s 00:10:59.707 user 0m4.229s 00:10:59.707 sys 0m0.338s 00:10:59.707 20:53:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.707 20:53:27 -- common/autotest_common.sh@10 -- # set +x 00:10:59.707 ************************************ 00:10:59.707 END TEST accel_deomp_full_mthread 00:10:59.707 ************************************ 00:10:59.707 20:53:27 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:59.707 20:53:27 -- accel/accel.sh@129 -- # build_accel_config 00:10:59.707 20:53:27 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:59.707 20:53:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.707 20:53:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:59.707 20:53:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:59.707 20:53:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.707 20:53:27 -- common/autotest_common.sh@10 -- # set +x 00:10:59.707 20:53:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.707 20:53:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.707 20:53:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.707 20:53:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.707 20:53:27 -- accel/accel.sh@42 -- # jq -r . 00:10:59.707 ************************************ 00:10:59.707 START TEST accel_dif_functional_tests 00:10:59.707 ************************************ 00:10:59.707 20:53:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:59.707 [2024-06-09 20:53:27.647492] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:59.707 [2024-06-09 20:53:27.647975] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107547 ] 00:10:59.708 [2024-06-09 20:53:27.826354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.966 [2024-06-09 20:53:28.008043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.966 [2024-06-09 20:53:28.008152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.966 [2024-06-09 20:53:28.008168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.225 00:11:00.225 00:11:00.225 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.225 http://cunit.sourceforge.net/ 00:11:00.225 00:11:00.225 00:11:00.225 Suite: accel_dif 00:11:00.225 Test: verify: DIF generated, GUARD check ...passed 00:11:00.225 Test: verify: DIF generated, APPTAG check ...passed 00:11:00.225 Test: verify: DIF generated, REFTAG check ...passed 00:11:00.225 Test: verify: DIF not generated, GUARD check ...[2024-06-09 20:53:28.303139] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.225 [2024-06-09 20:53:28.303420] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:00.225 passed 00:11:00.225 Test: verify: DIF not generated, APPTAG check ...[2024-06-09 20:53:28.303677] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.225 [2024-06-09 20:53:28.303910] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:00.225 passed 00:11:00.225 Test: verify: DIF not generated, REFTAG check ...[2024-06-09 20:53:28.304357] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.225 [2024-06-09 20:53:28.304573] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:00.225 passed 00:11:00.225 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:00.225 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-09 20:53:28.305198] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:00.225 passed 00:11:00.225 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:00.225 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:00.225 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:00.225 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-09 20:53:28.306337] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:00.225 passed 00:11:00.225 Test: generate copy: DIF generated, GUARD check ...passed 00:11:00.225 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:00.225 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:00.225 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:00.225 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:00.225 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:00.225 Test: generate copy: iovecs-len validate ...[2024-06-09 20:53:28.308277] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:00.225 passed 00:11:00.225 Test: generate copy: buffer alignment validate ...passed 00:11:00.225 00:11:00.225 Run Summary: Type Total Ran Passed Failed Inactive 00:11:00.225 suites 1 1 n/a 0 0 00:11:00.225 tests 20 20 20 0 0 00:11:00.225 asserts 204 204 204 0 n/a 00:11:00.225 00:11:00.225 Elapsed time = 0.012 seconds 00:11:01.161 ************************************ 00:11:01.161 END TEST accel_dif_functional_tests 00:11:01.161 ************************************ 00:11:01.161 00:11:01.161 real 0m1.721s 00:11:01.161 user 0m3.260s 00:11:01.161 sys 0m0.264s 00:11:01.161 20:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.161 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:01.161 00:11:01.161 real 1m43.685s 00:11:01.161 user 1m53.579s 00:11:01.161 sys 0m9.384s 00:11:01.161 20:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.161 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:01.161 ************************************ 00:11:01.161 END TEST accel 00:11:01.161 ************************************ 00:11:01.420 20:53:29 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:01.420 20:53:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:01.420 20:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.420 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:01.420 ************************************ 00:11:01.420 START TEST accel_rpc 00:11:01.420 ************************************ 00:11:01.420 20:53:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:01.420 * Looking for test storage... 00:11:01.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:01.420 20:53:29 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:01.420 20:53:29 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=107639 00:11:01.420 20:53:29 -- accel/accel_rpc.sh@15 -- # waitforlisten 107639 00:11:01.420 20:53:29 -- common/autotest_common.sh@819 -- # '[' -z 107639 ']' 00:11:01.420 20:53:29 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:01.420 20:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.420 20:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:01.420 20:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.420 20:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:01.420 20:53:29 -- common/autotest_common.sh@10 -- # set +x 00:11:01.420 [2024-06-09 20:53:29.524967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
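The accel_rpc suite starting here exercises opcode reassignment over JSON-RPC, which is why spdk_tgt is launched with --wait-for-rpc: opcodes can only be reassigned before framework initialization. Replayed by hand, the flow traced below looks roughly like this (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the sleep is a crude stand-in for the harness's waitforlisten):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  sleep 1                                     # stand-in for waitforlisten
  $rpc accel_assign_opc -o copy -m incorrect  # accepted at RPC time (see NOTICE below)
  $rpc accel_assign_opc -o copy -m software   # overrides the bogus assignment
  $rpc framework_start_init                   # finish subsystem initialization
  $rpc accel_get_opc_assignments | jq -r .copy | grep software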
00:11:01.420 [2024-06-09 20:53:29.525470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107639 ] 00:11:01.680 [2024-06-09 20:53:29.698847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.939 [2024-06-09 20:53:29.925408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:01.939 [2024-06-09 20:53:29.926034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.506 20:53:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:02.506 20:53:30 -- common/autotest_common.sh@852 -- # return 0 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:02.506 20:53:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:02.506 20:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.506 20:53:30 -- common/autotest_common.sh@10 -- # set +x 00:11:02.506 ************************************ 00:11:02.506 START TEST accel_assign_opcode 00:11:02.506 ************************************ 00:11:02.506 20:53:30 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:02.506 20:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.506 20:53:30 -- common/autotest_common.sh@10 -- # set +x 00:11:02.506 [2024-06-09 20:53:30.439946] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:02.506 20:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:02.506 20:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.506 20:53:30 -- common/autotest_common.sh@10 -- # set +x 00:11:02.506 [2024-06-09 20:53:30.447922] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:02.506 20:53:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.506 20:53:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:02.506 20:53:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.506 20:53:30 -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 20:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.100 20:53:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:03.100 20:53:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.100 20:53:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:03.100 20:53:31 -- accel/accel_rpc.sh@42 -- # grep software 00:11:03.100 20:53:31 -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 20:53:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.100 software 00:11:03.100 00:11:03.100 real 0m0.713s 00:11:03.100 user 0m0.060s 00:11:03.100 sys 0m0.001s 00:11:03.100 20:53:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.100 ************************************ 00:11:03.100 END TEST accel_assign_opcode 00:11:03.100 
************************************ 00:11:03.100 20:53:31 -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 20:53:31 -- accel/accel_rpc.sh@55 -- # killprocess 107639 00:11:03.100 20:53:31 -- common/autotest_common.sh@926 -- # '[' -z 107639 ']' 00:11:03.100 20:53:31 -- common/autotest_common.sh@930 -- # kill -0 107639 00:11:03.100 20:53:31 -- common/autotest_common.sh@931 -- # uname 00:11:03.100 20:53:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.100 20:53:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107639 00:11:03.100 20:53:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:03.100 killing process with pid 107639 00:11:03.100 20:53:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:03.100 20:53:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107639' 00:11:03.100 20:53:31 -- common/autotest_common.sh@945 -- # kill 107639 00:11:03.100 20:53:31 -- common/autotest_common.sh@950 -- # wait 107639 00:11:05.000 00:11:05.000 real 0m3.634s 00:11:05.000 user 0m3.638s 00:11:05.000 sys 0m0.472s 00:11:05.000 20:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.000 ************************************ 00:11:05.000 END TEST accel_rpc 00:11:05.000 ************************************ 00:11:05.000 20:53:33 -- common/autotest_common.sh@10 -- # set +x 00:11:05.000 20:53:33 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:05.000 20:53:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:05.000 20:53:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.000 20:53:33 -- common/autotest_common.sh@10 -- # set +x 00:11:05.000 ************************************ 00:11:05.000 START TEST app_cmdline 00:11:05.000 ************************************ 00:11:05.000 20:53:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:05.000 * Looking for test storage... 00:11:05.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:05.000 20:53:33 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:05.000 20:53:33 -- app/cmdline.sh@17 -- # spdk_tgt_pid=107766 00:11:05.000 20:53:33 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:05.000 20:53:33 -- app/cmdline.sh@18 -- # waitforlisten 107766 00:11:05.000 20:53:33 -- common/autotest_common.sh@819 -- # '[' -z 107766 ']' 00:11:05.000 20:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.000 20:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:05.000 20:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.000 20:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:05.000 20:53:33 -- common/autotest_common.sh@10 -- # set +x 00:11:05.258 [2024-06-09 20:53:33.214662] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
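cmdline.sh starts the target with an RPC whitelist, and the exchange below is the whole point of the test: the two listed methods succeed, anything else is rejected with JSON-RPC error -32601. A rough manual equivalent:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      --rpcs-allowed spdk_get_version,rpc_get_methods &
  sleep 1                                    # stand-in for waitforlisten
  $rpc spdk_get_version                      # allowed: returns the version object
  $rpc rpc_get_methods | jq -r '.[]' | sort  # allowed: lists exactly the two methods
  $rpc env_dpdk_get_mem_stats                # not whitelisted: "Method not found"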
00:11:05.258 [2024-06-09 20:53:33.215159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107766 ] 00:11:05.258 [2024-06-09 20:53:33.382065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.516 [2024-06-09 20:53:33.554660] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:05.516 [2024-06-09 20:53:33.555182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.888 20:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:06.888 20:53:34 -- common/autotest_common.sh@852 -- # return 0 00:11:06.888 20:53:34 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:06.888 { 00:11:06.888 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:11:06.888 "fields": { 00:11:06.888 "major": 24, 00:11:06.888 "minor": 1, 00:11:06.888 "patch": 1, 00:11:06.888 "suffix": "-pre", 00:11:06.888 "commit": "130b9406a" 00:11:06.888 } 00:11:06.888 } 00:11:06.888 20:53:34 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:06.888 20:53:34 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:06.888 20:53:34 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:06.888 20:53:34 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:06.888 20:53:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:06.888 20:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:06.888 20:53:35 -- common/autotest_common.sh@10 -- # set +x 00:11:06.888 20:53:35 -- app/cmdline.sh@26 -- # sort 00:11:06.888 20:53:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:06.888 20:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:06.888 20:53:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:06.888 20:53:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:06.888 20:53:35 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:06.888 20:53:35 -- common/autotest_common.sh@640 -- # local es=0 00:11:06.888 20:53:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:06.888 20:53:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.888 20:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:06.888 20:53:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.888 20:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:06.888 20:53:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.888 20:53:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:06.888 20:53:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.888 20:53:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:06.888 20:53:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:07.454 request: 00:11:07.454 { 00:11:07.454 "method": "env_dpdk_get_mem_stats", 00:11:07.454 "req_id": 1 00:11:07.454 } 00:11:07.454 Got 
JSON-RPC error response 00:11:07.454 response: 00:11:07.454 { 00:11:07.454 "code": -32601, 00:11:07.454 "message": "Method not found" 00:11:07.454 } 00:11:07.454 20:53:35 -- common/autotest_common.sh@643 -- # es=1 00:11:07.454 20:53:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:07.454 20:53:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:07.454 20:53:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:07.454 20:53:35 -- app/cmdline.sh@1 -- # killprocess 107766 00:11:07.454 20:53:35 -- common/autotest_common.sh@926 -- # '[' -z 107766 ']' 00:11:07.454 20:53:35 -- common/autotest_common.sh@930 -- # kill -0 107766 00:11:07.454 20:53:35 -- common/autotest_common.sh@931 -- # uname 00:11:07.454 20:53:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:07.454 20:53:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107766 00:11:07.454 killing process with pid 107766 00:11:07.454 20:53:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:07.454 20:53:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:07.454 20:53:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107766' 00:11:07.454 20:53:35 -- common/autotest_common.sh@945 -- # kill 107766 00:11:07.454 20:53:35 -- common/autotest_common.sh@950 -- # wait 107766 00:11:09.354 ************************************ 00:11:09.354 END TEST app_cmdline 00:11:09.354 ************************************ 00:11:09.354 00:11:09.354 real 0m4.155s 00:11:09.354 user 0m4.703s 00:11:09.354 sys 0m0.600s 00:11:09.354 20:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.354 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 20:53:37 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:09.354 20:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:09.354 20:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.354 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:09.354 ************************************ 00:11:09.354 START TEST version 00:11:09.354 ************************************ 00:11:09.354 20:53:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:09.354 * Looking for test storage... 
00:11:09.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:09.354 20:53:37 -- app/version.sh@17 -- # get_header_version major 00:11:09.354 20:53:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:09.354 20:53:37 -- app/version.sh@14 -- # tr -d '"' 00:11:09.354 20:53:37 -- app/version.sh@14 -- # cut -f2 00:11:09.354 20:53:37 -- app/version.sh@17 -- # major=24 00:11:09.354 20:53:37 -- app/version.sh@18 -- # get_header_version minor 00:11:09.354 20:53:37 -- app/version.sh@14 -- # cut -f2 00:11:09.354 20:53:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:09.354 20:53:37 -- app/version.sh@14 -- # tr -d '"' 00:11:09.354 20:53:37 -- app/version.sh@18 -- # minor=1 00:11:09.354 20:53:37 -- app/version.sh@19 -- # get_header_version patch 00:11:09.354 20:53:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:09.354 20:53:37 -- app/version.sh@14 -- # cut -f2 00:11:09.354 20:53:37 -- app/version.sh@14 -- # tr -d '"' 00:11:09.354 20:53:37 -- app/version.sh@19 -- # patch=1 00:11:09.354 20:53:37 -- app/version.sh@20 -- # get_header_version suffix 00:11:09.355 20:53:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:09.355 20:53:37 -- app/version.sh@14 -- # cut -f2 00:11:09.355 20:53:37 -- app/version.sh@14 -- # tr -d '"' 00:11:09.355 20:53:37 -- app/version.sh@20 -- # suffix=-pre 00:11:09.355 20:53:37 -- app/version.sh@22 -- # version=24.1 00:11:09.355 20:53:37 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:09.355 20:53:37 -- app/version.sh@25 -- # version=24.1.1 00:11:09.355 20:53:37 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:09.355 20:53:37 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:09.355 20:53:37 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:09.355 20:53:37 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:09.355 20:53:37 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:09.355 ************************************ 00:11:09.355 END TEST version 00:11:09.355 ************************************ 00:11:09.355 00:11:09.355 real 0m0.133s 00:11:09.355 user 0m0.110s 00:11:09.355 sys 0m0.057s 00:11:09.355 20:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.355 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:09.355 20:53:37 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:09.355 20:53:37 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:09.355 20:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:09.355 20:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.355 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:09.355 ************************************ 00:11:09.355 START TEST blockdev_general 00:11:09.355 ************************************ 00:11:09.355 20:53:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:09.355 * Looking for test storage... 
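Each get_header_version call traced above is the same grep/cut/tr pipeline over include/spdk/version.h; the major number, for instance, comes from:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
      /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'

Minor, patch, and suffix are extracted the same way, recombined into 24.1.1rc0 (the -pre suffix maps to rc0), and compared against python3 -c 'import spdk; print(spdk.__version__)', so the test fails whenever version.h and the Python package disagree.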
00:11:09.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:09.355 20:53:37 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:09.355 20:53:37 -- bdev/nbd_common.sh@6 -- # set -e 00:11:09.355 20:53:37 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:09.355 20:53:37 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:09.355 20:53:37 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:09.355 20:53:37 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:09.355 20:53:37 -- bdev/blockdev.sh@18 -- # : 00:11:09.355 20:53:37 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:09.355 20:53:37 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:09.355 20:53:37 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:09.613 20:53:37 -- bdev/blockdev.sh@672 -- # uname -s 00:11:09.613 20:53:37 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:09.613 20:53:37 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:09.613 20:53:37 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:09.613 20:53:37 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:09.613 20:53:37 -- bdev/blockdev.sh@682 -- # dek= 00:11:09.613 20:53:37 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:09.613 20:53:37 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:09.613 20:53:37 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:09.613 20:53:37 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:09.613 20:53:37 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:09.613 20:53:37 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:09.613 20:53:37 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=107945 00:11:09.613 20:53:37 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:09.613 20:53:37 -- bdev/blockdev.sh@47 -- # waitforlisten 107945 00:11:09.613 20:53:37 -- common/autotest_common.sh@819 -- # '[' -z 107945 ']' 00:11:09.613 20:53:37 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:09.613 20:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.613 20:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.613 20:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.613 20:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.613 20:53:37 -- common/autotest_common.sh@10 -- # set +x 00:11:09.613 [2024-06-09 20:53:37.616366] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:09.613 [2024-06-09 20:53:37.616571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107945 ] 00:11:09.613 [2024-06-09 20:53:37.781761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.871 [2024-06-09 20:53:37.957212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.871 [2024-06-09 20:53:37.957438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.437 20:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:10.437 20:53:38 -- common/autotest_common.sh@852 -- # return 0 00:11:10.437 20:53:38 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:10.437 20:53:38 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:10.437 20:53:38 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:10.437 20:53:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.437 20:53:38 -- common/autotest_common.sh@10 -- # set +x 00:11:11.371 [2024-06-09 20:53:39.250816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.371 [2024-06-09 20:53:39.250910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:11.371 00:11:11.371 [2024-06-09 20:53:39.258796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.371 [2024-06-09 20:53:39.258865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:11.371 00:11:11.371 Malloc0 00:11:11.371 Malloc1 00:11:11.371 Malloc2 00:11:11.371 Malloc3 00:11:11.371 Malloc4 00:11:11.371 Malloc5 00:11:11.371 Malloc6 00:11:11.630 Malloc7 00:11:11.630 Malloc8 00:11:11.630 Malloc9 00:11:11.630 [2024-06-09 20:53:39.631507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:11.630 [2024-06-09 20:53:39.631627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.630 [2024-06-09 20:53:39.631678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:11:11.630 [2024-06-09 20:53:39.631721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.630 [2024-06-09 20:53:39.634312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.630 [2024-06-09 20:53:39.634388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:11.630 TestPT 00:11:11.630 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.630 20:53:39 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:11.630 5000+0 records in 00:11:11.630 5000+0 records out 00:11:11.630 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274091 s, 374 MB/s 00:11:11.630 20:53:39 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:11.630 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.630 20:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:11.630 AIO0 00:11:11.630 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.630 20:53:39 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:11.630 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.630 20:53:39 -- common/autotest_common.sh@10 -- # set +x 
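The AIO target used by the rest of the blockdev suites is just a 10 MB file served through the aio bdev module; the two commands above, replayed outside the harness (rpc_cmd wraps scripts/rpc.py):

  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
      /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048

The trailing 2048 overrides the block size, which is why AIO0 shows up in the bdev_get_bdevs dump below with "block_size": 2048, "num_blocks": 5000, and "block_size_override": true.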
00:11:11.630 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.630 20:53:39 -- bdev/blockdev.sh@738 -- # cat 00:11:11.630 20:53:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:11.630 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.630 20:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:11.630 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.630 20:53:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:11.630 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.630 20:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:11.630 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.630 20:53:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:11.630 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.630 20:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:11.901 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.902 20:53:39 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:11.902 20:53:39 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:11.902 20:53:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.902 20:53:39 -- common/autotest_common.sh@10 -- # set +x 00:11:11.902 20:53:39 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:11.902 20:53:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.902 20:53:39 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:11.902 20:53:39 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:11.903 20:53:39 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee0e517f-a178-49c5-bc6d-6d246f461f09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee0e517f-a178-49c5-bc6d-6d246f461f09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7fb35f0f-d07f-5d7b-a6db-670326d62e8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7fb35f0f-d07f-5d7b-a6db-670326d62e8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "00a096ce-b601-5198-bac0-03cb2d3c6168"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00a096ce-b601-5198-bac0-03cb2d3c6168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9ab8e275-a481-5ec7-9059-b7663e45052c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ab8e275-a481-5ec7-9059-b7663e45052c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "4585e9bc-a41b-59e1-bad8-eaea59dfcc03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4585e9bc-a41b-59e1-bad8-eaea59dfcc03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "91e8ebf9-7d05-51ce-91de-fce4aa85611b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "91e8ebf9-7d05-51ce-91de-fce4aa85611b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f50c1585-de13-50eb-a77d-40eefa1b4f8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f50c1585-de13-50eb-a77d-40eefa1b4f8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "1286f264-be42-5321-9331-295e73d95fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1286f264-be42-5321-9331-295e73d95fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "10ebd363-d479-572f-80c6-25306d724df2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ebd363-d479-572f-80c6-25306d724df2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b91f1683-797c-5387-b130-f25cf069c72a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b91f1683-797c-5387-b130-f25cf069c72a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a139230b-6ecb-536f-a4d4-4693523c7102"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a139230b-6ecb-536f-a4d4-4693523c7102",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54e17f76-b807-5b26-b8ec-09dfd7a69569"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54e17f76-b807-5b26-b8ec-09dfd7a69569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7d4792c-3475-42b4-a526-285f46ea2c1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "20795a9e-f29a-4785-b3c6-6ee50b27aa7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "eae1837e-aee7-4800-9f4e-072b0b7f9972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "a8806ac4-9b71-4a2b-aac1-190214a0d983"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e873c7f-75c4-4556-8c41-d5168ed6a6e8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fee382f2-cc85-4f5d-8f6a-e1cee23628dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "43f1c42e-6f61-4d1c-ac29-275cd75e5432",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "34fd1cbd-2758-46dc-9ff7-0478c563d70f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c5e97f87-d478-428b-be19-7cbef5b3c4b8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c5e97f87-d478-428b-be19-7cbef5b3c4b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:11.903 20:53:39 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:11.903 20:53:39 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:11.903 20:53:39 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:11.903 20:53:39 -- bdev/blockdev.sh@752 -- # killprocess 107945 00:11:11.903 20:53:39 -- common/autotest_common.sh@926 -- # '[' -z 107945 ']' 00:11:11.903 20:53:39 -- common/autotest_common.sh@930 -- # kill -0 107945 00:11:11.903 20:53:39 -- common/autotest_common.sh@931 -- # uname 00:11:11.903 20:53:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:11.903 20:53:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107945 00:11:11.903 20:53:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:11.903 20:53:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:11.903 killing process with pid 107945 00:11:11.903 20:53:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107945' 00:11:11.903 20:53:39 -- common/autotest_common.sh@945 -- # kill 107945 00:11:11.903 20:53:39 -- common/autotest_common.sh@950 -- # wait 107945 00:11:14.445 20:53:42 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:14.445 20:53:42 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:14.445 20:53:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:14.445 20:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.445 20:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:14.445 ************************************ 00:11:14.445 START TEST bdev_hello_world 00:11:14.445 ************************************ 00:11:14.445 20:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:14.445 [2024-06-09 20:53:42.602063] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:14.445 [2024-06-09 20:53:42.602272] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108035 ] 00:11:14.702 [2024-06-09 20:53:42.769502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.960 [2024-06-09 20:53:42.946680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.217 [2024-06-09 20:53:43.273640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.217 [2024-06-09 20:53:43.273747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.217 [2024-06-09 20:53:43.281616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.217 [2024-06-09 20:53:43.281688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.217 [2024-06-09 20:53:43.289636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.217 [2024-06-09 20:53:43.289695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:15.217 [2024-06-09 20:53:43.289727] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:15.476 [2024-06-09 20:53:43.485811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.476 [2024-06-09 20:53:43.485947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.476 [2024-06-09 20:53:43.486048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:15.476 [2024-06-09 20:53:43.486086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.476 [2024-06-09 20:53:43.488616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.476 [2024-06-09 20:53:43.488700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:15.734 [2024-06-09 20:53:43.778473] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:15.734 [2024-06-09 20:53:43.778565] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:15.734 [2024-06-09 20:53:43.778629] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:15.734 [2024-06-09 20:53:43.778684] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:15.734 [2024-06-09 20:53:43.778763] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:15.734 [2024-06-09 20:53:43.778791] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:15.735 [2024-06-09 20:53:43.778848] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
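Stripped of the harness, the hello_world flow in the NOTICE lines above is a single invocation:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0

The example opens Malloc0 from the JSON config, gets an I/O channel, writes "Hello World!", reads it back, and stops the app once the string round-trips.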
00:11:15.735 00:11:15.735 [2024-06-09 20:53:43.778891] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:17.638 00:11:17.638 real 0m2.931s 00:11:17.638 user 0m2.400s 00:11:17.638 sys 0m0.373s 00:11:17.638 20:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.638 20:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:17.638 ************************************ 00:11:17.638 END TEST bdev_hello_world 00:11:17.638 ************************************ 00:11:17.638 20:53:45 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:17.638 20:53:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:17.638 20:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:17.638 20:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:17.638 ************************************ 00:11:17.638 START TEST bdev_bounds 00:11:17.638 ************************************ 00:11:17.638 20:53:45 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:17.638 20:53:45 -- bdev/blockdev.sh@288 -- # bdevio_pid=108094 00:11:17.638 20:53:45 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:17.638 20:53:45 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:17.638 Process bdevio pid: 108094 00:11:17.638 20:53:45 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 108094' 00:11:17.638 20:53:45 -- bdev/blockdev.sh@291 -- # waitforlisten 108094 00:11:17.638 20:53:45 -- common/autotest_common.sh@819 -- # '[' -z 108094 ']' 00:11:17.638 20:53:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.638 20:53:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:17.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.638 20:53:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.638 20:53:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:17.638 20:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:17.638 [2024-06-09 20:53:45.570563] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:17.638 [2024-06-09 20:53:45.570728] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108094 ] 00:11:17.638 [2024-06-09 20:53:45.739369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.896 [2024-06-09 20:53:45.929306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.896 [2024-06-09 20:53:45.929421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.896 [2024-06-09 20:53:45.929426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.155 [2024-06-09 20:53:46.270530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:18.155 [2024-06-09 20:53:46.270616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:18.155 [2024-06-09 20:53:46.278486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:18.156 [2024-06-09 20:53:46.278568] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:18.156 [2024-06-09 20:53:46.286537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:18.156 [2024-06-09 20:53:46.286589] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:18.156 [2024-06-09 20:53:46.286612] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:18.414 [2024-06-09 20:53:46.471133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:18.414 [2024-06-09 20:53:46.471301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.414 [2024-06-09 20:53:46.471370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:18.414 [2024-06-09 20:53:46.471426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.414 [2024-06-09 20:53:46.474209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.415 [2024-06-09 20:53:46.474254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:19.351 20:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:19.351 20:53:47 -- common/autotest_common.sh@852 -- # return 0 00:11:19.351 20:53:47 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:19.351 I/O targets: 00:11:19.351 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:19.351 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:19.351 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:19.351 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:19.351 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:19.351 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:19.351 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:19.351 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:19.351 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
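bdev_bounds runs the bdevio app against the same bdev.json config and then kicks off the CUnit suites below over RPC (-w appears to hold the app until perform_tests arrives; -s 0 requests no extra reserved memory):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
  sleep 1    # stand-in for waitforlisten
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Every bdev from the config gets the same battery of boundary tests: reads and writes at, across, and past the end of the device, overlapping offsets, and transfers larger than 128k, so the suites repeat once per target.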
00:11:19.351 00:11:19.351 00:11:19.351 CUnit - A unit testing framework for C - Version 2.1-3 00:11:19.351 http://cunit.sourceforge.net/ 00:11:19.351 00:11:19.351 00:11:19.351 Suite: bdevio tests on: AIO0 00:11:19.351 Test: blockdev write read block ...passed 00:11:19.351 Test: blockdev write zeroes read block ...passed 00:11:19.351 Test: blockdev write zeroes read no split ...passed 00:11:19.351 Test: blockdev write zeroes read split ...passed 00:11:19.352 Test: blockdev write zeroes read split partial ...passed 00:11:19.352 Test: blockdev reset ...passed 00:11:19.352 Test: blockdev write read 8 blocks ...passed 00:11:19.352 Test: blockdev write read size > 128k ...passed 00:11:19.352 Test: blockdev write read invalid size ...passed 00:11:19.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.352 Test: blockdev write read max offset ...passed 00:11:19.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.352 Test: blockdev writev readv 8 blocks ...passed 00:11:19.352 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.352 Test: blockdev writev readv block ...passed 00:11:19.352 Test: blockdev writev readv size > 128k ...passed 00:11:19.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.352 Test: blockdev comparev and writev ...passed 00:11:19.352 Test: blockdev nvme passthru rw ...passed 00:11:19.352 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.352 Test: blockdev nvme admin passthru ...passed 00:11:19.352 Test: blockdev copy ...passed 00:11:19.352 Suite: bdevio tests on: raid1 00:11:19.352 Test: blockdev write read block ...passed 00:11:19.352 Test: blockdev write zeroes read block ...passed 00:11:19.352 Test: blockdev write zeroes read no split ...passed 00:11:19.352 Test: blockdev write zeroes read split ...passed 00:11:19.352 Test: blockdev write zeroes read split partial ...passed 00:11:19.352 Test: blockdev reset ...passed 00:11:19.352 Test: blockdev write read 8 blocks ...passed 00:11:19.352 Test: blockdev write read size > 128k ...passed 00:11:19.352 Test: blockdev write read invalid size ...passed 00:11:19.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.352 Test: blockdev write read max offset ...passed 00:11:19.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.352 Test: blockdev writev readv 8 blocks ...passed 00:11:19.352 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.352 Test: blockdev writev readv block ...passed 00:11:19.352 Test: blockdev writev readv size > 128k ...passed 00:11:19.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.352 Test: blockdev comparev and writev ...passed 00:11:19.352 Test: blockdev nvme passthru rw ...passed 00:11:19.352 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.352 Test: blockdev nvme admin passthru ...passed 00:11:19.352 Test: blockdev copy ...passed 00:11:19.352 Suite: bdevio tests on: concat0 00:11:19.352 Test: blockdev write read block ...passed 00:11:19.352 Test: blockdev write zeroes read block ...passed 00:11:19.352 Test: blockdev write zeroes read no split ...passed 00:11:19.352 Test: blockdev write zeroes read split ...passed 00:11:19.352 Test: blockdev write zeroes read split partial ...passed 00:11:19.352 Test: blockdev reset 
...passed 00:11:19.352 Test: blockdev write read 8 blocks ...passed 00:11:19.352 Test: blockdev write read size > 128k ...passed 00:11:19.352 Test: blockdev write read invalid size ...passed 00:11:19.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.352 Test: blockdev write read max offset ...passed 00:11:19.352 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.352 Test: blockdev writev readv 8 blocks ...passed 00:11:19.352 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.352 Test: blockdev writev readv block ...passed 00:11:19.352 Test: blockdev writev readv size > 128k ...passed 00:11:19.352 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.352 Test: blockdev comparev and writev ...passed 00:11:19.352 Test: blockdev nvme passthru rw ...passed 00:11:19.352 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.352 Test: blockdev nvme admin passthru ...passed 00:11:19.352 Test: blockdev copy ...passed 00:11:19.352 Suite: bdevio tests on: raid0 00:11:19.352 Test: blockdev write read block ...passed 00:11:19.352 Test: blockdev write zeroes read block ...passed 00:11:19.352 Test: blockdev write zeroes read no split ...passed 00:11:19.352 Test: blockdev write zeroes read split ...passed 00:11:19.611 Test: blockdev write zeroes read split partial ...passed 00:11:19.611 Test: blockdev reset ...passed 00:11:19.611 Test: blockdev write read 8 blocks ...passed 00:11:19.611 Test: blockdev write read size > 128k ...passed 00:11:19.611 Test: blockdev write read invalid size ...passed 00:11:19.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.611 Test: blockdev write read max offset ...passed 00:11:19.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.611 Test: blockdev writev readv 8 blocks ...passed 00:11:19.611 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.611 Test: blockdev writev readv block ...passed 00:11:19.611 Test: blockdev writev readv size > 128k ...passed 00:11:19.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.611 Test: blockdev comparev and writev ...passed 00:11:19.611 Test: blockdev nvme passthru rw ...passed 00:11:19.611 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.611 Test: blockdev nvme admin passthru ...passed 00:11:19.611 Test: blockdev copy ...passed 00:11:19.611 Suite: bdevio tests on: TestPT 00:11:19.611 Test: blockdev write read block ...passed 00:11:19.611 Test: blockdev write zeroes read block ...passed 00:11:19.611 Test: blockdev write zeroes read no split ...passed 00:11:19.611 Test: blockdev write zeroes read split ...passed 00:11:19.611 Test: blockdev write zeroes read split partial ...passed 00:11:19.611 Test: blockdev reset ...passed 00:11:19.611 Test: blockdev write read 8 blocks ...passed 00:11:19.611 Test: blockdev write read size > 128k ...passed 00:11:19.611 Test: blockdev write read invalid size ...passed 00:11:19.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.611 Test: blockdev write read max offset ...passed 00:11:19.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.611 Test: blockdev writev readv 8 blocks 
...passed 00:11:19.611 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.611 Test: blockdev writev readv block ...passed 00:11:19.611 Test: blockdev writev readv size > 128k ...passed 00:11:19.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.611 Test: blockdev comparev and writev ...passed 00:11:19.611 Test: blockdev nvme passthru rw ...passed 00:11:19.611 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.611 Test: blockdev nvme admin passthru ...passed 00:11:19.611 Test: blockdev copy ...passed 00:11:19.611 Suite: bdevio tests on: Malloc2p7 00:11:19.611 Test: blockdev write read block ...passed 00:11:19.611 Test: blockdev write zeroes read block ...passed 00:11:19.611 Test: blockdev write zeroes read no split ...passed 00:11:19.611 Test: blockdev write zeroes read split ...passed 00:11:19.611 Test: blockdev write zeroes read split partial ...passed 00:11:19.611 Test: blockdev reset ...passed 00:11:19.611 Test: blockdev write read 8 blocks ...passed 00:11:19.611 Test: blockdev write read size > 128k ...passed 00:11:19.611 Test: blockdev write read invalid size ...passed 00:11:19.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.611 Test: blockdev write read max offset ...passed 00:11:19.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.611 Test: blockdev writev readv 8 blocks ...passed 00:11:19.611 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.611 Test: blockdev writev readv block ...passed 00:11:19.611 Test: blockdev writev readv size > 128k ...passed 00:11:19.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.611 Test: blockdev comparev and writev ...passed 00:11:19.611 Test: blockdev nvme passthru rw ...passed 00:11:19.611 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.611 Test: blockdev nvme admin passthru ...passed 00:11:19.611 Test: blockdev copy ...passed 00:11:19.611 Suite: bdevio tests on: Malloc2p6 00:11:19.611 Test: blockdev write read block ...passed 00:11:19.611 Test: blockdev write zeroes read block ...passed 00:11:19.611 Test: blockdev write zeroes read no split ...passed 00:11:19.611 Test: blockdev write zeroes read split ...passed 00:11:19.611 Test: blockdev write zeroes read split partial ...passed 00:11:19.611 Test: blockdev reset ...passed 00:11:19.611 Test: blockdev write read 8 blocks ...passed 00:11:19.611 Test: blockdev write read size > 128k ...passed 00:11:19.611 Test: blockdev write read invalid size ...passed 00:11:19.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.611 Test: blockdev write read max offset ...passed 00:11:19.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.611 Test: blockdev writev readv 8 blocks ...passed 00:11:19.611 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.611 Test: blockdev writev readv block ...passed 00:11:19.611 Test: blockdev writev readv size > 128k ...passed 00:11:19.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.611 Test: blockdev comparev and writev ...passed 00:11:19.611 Test: blockdev nvme passthru rw ...passed 00:11:19.611 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.611 Test: blockdev nvme admin passthru ...passed 00:11:19.611 Test: blockdev copy ...passed 
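Each bdev above runs the identical 23-test bdevio suite, from "write read block" through "copy", so the totals in the run summary further down follow directly from the suite count. A minimal shell sketch of that arithmetic, assuming the same 16-bdev list used throughout this run (variable names here are illustrative, not from the SPDK tree):

    bdev_all=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 \
              Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
    tests_per_suite=23  # "write read block" ... "copy", as listed in each suite above
    echo "suites=${#bdev_all[@]} tests=$(( ${#bdev_all[@]} * tests_per_suite ))"
    # prints: suites=16 tests=368, matching the "suites 16" / "tests 368" summary lines
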
00:11:19.611 Suite: bdevio tests on: Malloc2p5 00:11:19.611 Test: blockdev write read block ...passed 00:11:19.611 Test: blockdev write zeroes read block ...passed 00:11:19.611 Test: blockdev write zeroes read no split ...passed 00:11:19.611 Test: blockdev write zeroes read split ...passed 00:11:19.611 Test: blockdev write zeroes read split partial ...passed 00:11:19.611 Test: blockdev reset ...passed 00:11:19.611 Test: blockdev write read 8 blocks ...passed 00:11:19.611 Test: blockdev write read size > 128k ...passed 00:11:19.611 Test: blockdev write read invalid size ...passed 00:11:19.611 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.611 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.611 Test: blockdev write read max offset ...passed 00:11:19.611 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.611 Test: blockdev writev readv 8 blocks ...passed 00:11:19.611 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.611 Test: blockdev writev readv block ...passed 00:11:19.611 Test: blockdev writev readv size > 128k ...passed 00:11:19.611 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.611 Test: blockdev comparev and writev ...passed 00:11:19.611 Test: blockdev nvme passthru rw ...passed 00:11:19.611 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.611 Test: blockdev nvme admin passthru ...passed 00:11:19.611 Test: blockdev copy ...passed 00:11:19.611 Suite: bdevio tests on: Malloc2p4 00:11:19.611 Test: blockdev write read block ...passed 00:11:19.611 Test: blockdev write zeroes read block ...passed 00:11:19.611 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:19.871 Test: blockdev write zeroes read split partial ...passed 00:11:19.871 Test: blockdev reset ...passed 00:11:19.871 Test: blockdev write read 8 blocks ...passed 00:11:19.871 Test: blockdev write read size > 128k ...passed 00:11:19.871 Test: blockdev write read invalid size ...passed 00:11:19.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.871 Test: blockdev write read max offset ...passed 00:11:19.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.871 Test: blockdev writev readv 8 blocks ...passed 00:11:19.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.871 Test: blockdev writev readv block ...passed 00:11:19.871 Test: blockdev writev readv size > 128k ...passed 00:11:19.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.871 Test: blockdev comparev and writev ...passed 00:11:19.871 Test: blockdev nvme passthru rw ...passed 00:11:19.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.871 Test: blockdev nvme admin passthru ...passed 00:11:19.871 Test: blockdev copy ...passed 00:11:19.871 Suite: bdevio tests on: Malloc2p3 00:11:19.871 Test: blockdev write read block ...passed 00:11:19.871 Test: blockdev write zeroes read block ...passed 00:11:19.871 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:19.871 Test: blockdev write zeroes read split partial ...passed 00:11:19.871 Test: blockdev reset ...passed 00:11:19.871 Test: blockdev write read 8 blocks ...passed 00:11:19.871 Test: blockdev write read size > 128k ...passed 00:11:19.871 Test: 
blockdev write read invalid size ...passed 00:11:19.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.871 Test: blockdev write read max offset ...passed 00:11:19.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.871 Test: blockdev writev readv 8 blocks ...passed 00:11:19.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.871 Test: blockdev writev readv block ...passed 00:11:19.871 Test: blockdev writev readv size > 128k ...passed 00:11:19.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.871 Test: blockdev comparev and writev ...passed 00:11:19.871 Test: blockdev nvme passthru rw ...passed 00:11:19.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.871 Test: blockdev nvme admin passthru ...passed 00:11:19.871 Test: blockdev copy ...passed 00:11:19.871 Suite: bdevio tests on: Malloc2p2 00:11:19.871 Test: blockdev write read block ...passed 00:11:19.871 Test: blockdev write zeroes read block ...passed 00:11:19.871 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:19.871 Test: blockdev write zeroes read split partial ...passed 00:11:19.871 Test: blockdev reset ...passed 00:11:19.871 Test: blockdev write read 8 blocks ...passed 00:11:19.871 Test: blockdev write read size > 128k ...passed 00:11:19.871 Test: blockdev write read invalid size ...passed 00:11:19.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.871 Test: blockdev write read max offset ...passed 00:11:19.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.871 Test: blockdev writev readv 8 blocks ...passed 00:11:19.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.871 Test: blockdev writev readv block ...passed 00:11:19.871 Test: blockdev writev readv size > 128k ...passed 00:11:19.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.871 Test: blockdev comparev and writev ...passed 00:11:19.871 Test: blockdev nvme passthru rw ...passed 00:11:19.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.871 Test: blockdev nvme admin passthru ...passed 00:11:19.871 Test: blockdev copy ...passed 00:11:19.871 Suite: bdevio tests on: Malloc2p1 00:11:19.871 Test: blockdev write read block ...passed 00:11:19.871 Test: blockdev write zeroes read block ...passed 00:11:19.871 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:19.871 Test: blockdev write zeroes read split partial ...passed 00:11:19.871 Test: blockdev reset ...passed 00:11:19.871 Test: blockdev write read 8 blocks ...passed 00:11:19.871 Test: blockdev write read size > 128k ...passed 00:11:19.871 Test: blockdev write read invalid size ...passed 00:11:19.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.871 Test: blockdev write read max offset ...passed 00:11:19.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.871 Test: blockdev writev readv 8 blocks ...passed 00:11:19.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.871 Test: blockdev writev readv block ...passed 
00:11:19.871 Test: blockdev writev readv size > 128k ...passed 00:11:19.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.871 Test: blockdev comparev and writev ...passed 00:11:19.871 Test: blockdev nvme passthru rw ...passed 00:11:19.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.871 Test: blockdev nvme admin passthru ...passed 00:11:19.871 Test: blockdev copy ...passed 00:11:19.871 Suite: bdevio tests on: Malloc2p0 00:11:19.871 Test: blockdev write read block ...passed 00:11:19.871 Test: blockdev write zeroes read block ...passed 00:11:19.871 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:19.871 Test: blockdev write zeroes read split partial ...passed 00:11:19.871 Test: blockdev reset ...passed 00:11:19.871 Test: blockdev write read 8 blocks ...passed 00:11:19.871 Test: blockdev write read size > 128k ...passed 00:11:19.871 Test: blockdev write read invalid size ...passed 00:11:19.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:19.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:19.871 Test: blockdev write read max offset ...passed 00:11:19.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:19.871 Test: blockdev writev readv 8 blocks ...passed 00:11:19.871 Test: blockdev writev readv 30 x 1block ...passed 00:11:19.871 Test: blockdev writev readv block ...passed 00:11:19.871 Test: blockdev writev readv size > 128k ...passed 00:11:19.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:19.871 Test: blockdev comparev and writev ...passed 00:11:19.871 Test: blockdev nvme passthru rw ...passed 00:11:19.871 Test: blockdev nvme passthru vendor specific ...passed 00:11:19.871 Test: blockdev nvme admin passthru ...passed 00:11:19.871 Test: blockdev copy ...passed 00:11:19.871 Suite: bdevio tests on: Malloc1p1 00:11:19.871 Test: blockdev write read block ...passed 00:11:19.871 Test: blockdev write zeroes read block ...passed 00:11:19.871 Test: blockdev write zeroes read no split ...passed 00:11:19.871 Test: blockdev write zeroes read split ...passed 00:11:20.130 Test: blockdev write zeroes read split partial ...passed 00:11:20.130 Test: blockdev reset ...passed 00:11:20.130 Test: blockdev write read 8 blocks ...passed 00:11:20.130 Test: blockdev write read size > 128k ...passed 00:11:20.130 Test: blockdev write read invalid size ...passed 00:11:20.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.130 Test: blockdev write read max offset ...passed 00:11:20.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.130 Test: blockdev writev readv 8 blocks ...passed 00:11:20.130 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.130 Test: blockdev writev readv block ...passed 00:11:20.130 Test: blockdev writev readv size > 128k ...passed 00:11:20.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.130 Test: blockdev comparev and writev ...passed 00:11:20.130 Test: blockdev nvme passthru rw ...passed 00:11:20.130 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.130 Test: blockdev nvme admin passthru ...passed 00:11:20.130 Test: blockdev copy ...passed 00:11:20.130 Suite: bdevio tests on: Malloc1p0 00:11:20.130 Test: blockdev write read block ...passed 00:11:20.130 Test: blockdev 
write zeroes read block ...passed 00:11:20.130 Test: blockdev write zeroes read no split ...passed 00:11:20.130 Test: blockdev write zeroes read split ...passed 00:11:20.130 Test: blockdev write zeroes read split partial ...passed 00:11:20.130 Test: blockdev reset ...passed 00:11:20.130 Test: blockdev write read 8 blocks ...passed 00:11:20.130 Test: blockdev write read size > 128k ...passed 00:11:20.130 Test: blockdev write read invalid size ...passed 00:11:20.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.130 Test: blockdev write read max offset ...passed 00:11:20.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.130 Test: blockdev writev readv 8 blocks ...passed 00:11:20.130 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.130 Test: blockdev writev readv block ...passed 00:11:20.130 Test: blockdev writev readv size > 128k ...passed 00:11:20.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.130 Test: blockdev comparev and writev ...passed 00:11:20.130 Test: blockdev nvme passthru rw ...passed 00:11:20.130 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.130 Test: blockdev nvme admin passthru ...passed 00:11:20.130 Test: blockdev copy ...passed 00:11:20.130 Suite: bdevio tests on: Malloc0 00:11:20.131 Test: blockdev write read block ...passed 00:11:20.131 Test: blockdev write zeroes read block ...passed 00:11:20.131 Test: blockdev write zeroes read no split ...passed 00:11:20.131 Test: blockdev write zeroes read split ...passed 00:11:20.131 Test: blockdev write zeroes read split partial ...passed 00:11:20.131 Test: blockdev reset ...passed 00:11:20.131 Test: blockdev write read 8 blocks ...passed 00:11:20.131 Test: blockdev write read size > 128k ...passed 00:11:20.131 Test: blockdev write read invalid size ...passed 00:11:20.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.131 Test: blockdev write read max offset ...passed 00:11:20.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.131 Test: blockdev writev readv 8 blocks ...passed 00:11:20.131 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.131 Test: blockdev writev readv block ...passed 00:11:20.131 Test: blockdev writev readv size > 128k ...passed 00:11:20.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.131 Test: blockdev comparev and writev ...passed 00:11:20.131 Test: blockdev nvme passthru rw ...passed 00:11:20.131 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.131 Test: blockdev nvme admin passthru ...passed 00:11:20.131 Test: blockdev copy ...passed 00:11:20.131 00:11:20.131 Run Summary: Type Total Ran Passed Failed Inactive 00:11:20.131 suites 16 16 n/a 0 0 00:11:20.131 tests 368 368 368 0 0 00:11:20.131 asserts 2224 2224 2224 0 n/a 00:11:20.131 00:11:20.131 Elapsed time = 2.477 seconds 00:11:20.131 0 00:11:20.131 20:53:48 -- bdev/blockdev.sh@293 -- # killprocess 108094 00:11:20.131 20:53:48 -- common/autotest_common.sh@926 -- # '[' -z 108094 ']' 00:11:20.131 20:53:48 -- common/autotest_common.sh@930 -- # kill -0 108094 00:11:20.131 20:53:48 -- common/autotest_common.sh@931 -- # uname 00:11:20.131 20:53:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:20.131 20:53:48 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108094 00:11:20.131 20:53:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:20.131 20:53:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:20.131 killing process with pid 108094 00:11:20.131 20:53:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108094' 00:11:20.131 20:53:48 -- common/autotest_common.sh@945 -- # kill 108094 00:11:20.131 20:53:48 -- common/autotest_common.sh@950 -- # wait 108094 00:11:22.055 20:53:49 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:22.055 00:11:22.055 real 0m4.307s 00:11:22.055 user 0m11.201s 00:11:22.055 sys 0m0.550s 00:11:22.055 20:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.055 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:22.055 ************************************ 00:11:22.055 END TEST bdev_bounds 00:11:22.055 ************************************ 00:11:22.055 20:53:49 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:22.055 20:53:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:22.055 20:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.055 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:22.055 ************************************ 00:11:22.055 START TEST bdev_nbd 00:11:22.055 ************************************ 00:11:22.055 20:53:49 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:22.055 20:53:49 -- bdev/blockdev.sh@298 -- # uname -s 00:11:22.055 20:53:49 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:22.055 20:53:49 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.055 20:53:49 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:22.055 20:53:49 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:22.055 20:53:49 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:22.055 20:53:49 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:22.055 20:53:49 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:22.055 20:53:49 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:22.055 20:53:49 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:22.055 20:53:49 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:22.055 20:53:49 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:22.055 20:53:49 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:22.055 20:53:49 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:22.055 20:53:49 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:22.055 20:53:49 -- bdev/blockdev.sh@316 -- # nbd_pid=108183 00:11:22.055 20:53:49 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:22.055 20:53:49 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:22.055 20:53:49 -- bdev/blockdev.sh@318 -- # waitforlisten 108183 /var/tmp/spdk-nbd.sock 00:11:22.055 20:53:49 -- common/autotest_common.sh@819 -- # '[' -z 108183 ']' 00:11:22.055 20:53:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:22.055 20:53:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:22.056 20:53:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:22.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:22.056 20:53:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:22.056 20:53:49 -- common/autotest_common.sh@10 -- # set +x 00:11:22.056 [2024-06-09 20:53:49.939533] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:22.056 [2024-06-09 20:53:49.939713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.056 [2024-06-09 20:53:50.090527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.314 [2024-06-09 20:53:50.276034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.573 [2024-06-09 20:53:50.613123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:22.573 [2024-06-09 20:53:50.613249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:22.573 [2024-06-09 20:53:50.621105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:22.573 [2024-06-09 20:53:50.621213] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:22.573 [2024-06-09 20:53:50.629101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:22.573 [2024-06-09 20:53:50.629167] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:22.573 [2024-06-09 20:53:50.629212] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:22.831 [2024-06-09 20:53:50.829115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:22.831 [2024-06-09 20:53:50.829302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.831 [2024-06-09 20:53:50.829362] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:22.831 [2024-06-09 20:53:50.829393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.831 [2024-06-09 20:53:50.831791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.831 [2024-06-09 20:53:50.831868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:23.767 20:53:51 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:11:23.767 20:53:51 -- common/autotest_common.sh@852 -- # return 0 00:11:23.767 20:53:51 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@24 -- # local i 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:23.767 20:53:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:23.767 20:53:51 -- common/autotest_common.sh@857 -- # local i 00:11:23.767 20:53:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:23.767 20:53:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:23.767 20:53:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:23.767 20:53:51 -- common/autotest_common.sh@861 -- # break 00:11:23.767 20:53:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:23.767 20:53:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:23.767 20:53:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.767 1+0 records in 00:11:23.767 1+0 records out 00:11:23.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031818 s, 12.9 MB/s 00:11:23.767 20:53:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.767 20:53:51 -- common/autotest_common.sh@874 -- # size=4096 00:11:23.767 20:53:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.767 20:53:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:23.767 20:53:51 -- common/autotest_common.sh@877 -- # return 0 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:23.767 20:53:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:24.026 20:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:24.026 20:53:52 -- common/autotest_common.sh@857 -- # local i 00:11:24.026 20:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:24.026 20:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:24.026 20:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:24.026 20:53:52 -- common/autotest_common.sh@861 -- # break 00:11:24.026 20:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:24.026 20:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:24.026 20:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.026 1+0 records in 00:11:24.026 1+0 records out 00:11:24.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346397 s, 11.8 MB/s 00:11:24.026 20:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.026 20:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:24.026 20:53:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.026 20:53:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:24.026 20:53:52 -- common/autotest_common.sh@877 -- # return 0 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:24.026 20:53:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:24.284 20:53:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:24.284 20:53:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:24.284 20:53:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:24.284 20:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:24.284 20:53:52 -- common/autotest_common.sh@857 -- # local i 00:11:24.284 20:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:24.284 20:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:24.284 20:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:24.284 20:53:52 -- common/autotest_common.sh@861 -- # break 00:11:24.284 20:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:24.284 20:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:24.284 20:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.284 1+0 records in 00:11:24.284 1+0 records out 00:11:24.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296876 s, 13.8 MB/s 00:11:24.284 20:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.284 20:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:24.284 20:53:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.284 20:53:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:24.284 20:53:52 -- common/autotest_common.sh@877 -- # return 0 00:11:24.284 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.284 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
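The same start-and-verify pattern repeats for each of the 16 bdevs: nbd_start_disk attaches the bdev to the next free /dev/nbdN, then a waitfornbd helper polls /proc/partitions until the device node appears and proves it with one direct-I/O read. A rough reconstruction of the helper from the xtrace above, with the two retry loops collapsed into one and the back-off delay assumed (a sketch, not the verbatim SPDK nbd_common.sh code):

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break  # device visible yet?
            sleep 0.1  # assumed delay between attempts; not visible in this trace
        done
        # a single 4 KiB O_DIRECT read proves the kernel can actually do I/O
        # (scratch path assumed; the traced run writes under test/bdev/nbdtest)
        dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)  # the trace checks the copied size next
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]  # non-zero size means the read really returned data
    }

Once all 16 devices pass this check, the test fetches the device-to-bdev map with nbd_get_disks over the same /var/tmp/spdk-nbd.sock RPC socket and extracts the /dev/nbdN names with jq -r '.[] | .nbd_device', as seen further down in this log.
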
00:11:24.284 20:53:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:24.542 20:53:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:24.542 20:53:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:24.542 20:53:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:24.542 20:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:24.542 20:53:52 -- common/autotest_common.sh@857 -- # local i 00:11:24.542 20:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:24.542 20:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:24.542 20:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:24.543 20:53:52 -- common/autotest_common.sh@861 -- # break 00:11:24.543 20:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:24.543 20:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:24.543 20:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.543 1+0 records in 00:11:24.543 1+0 records out 00:11:24.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303129 s, 13.5 MB/s 00:11:24.543 20:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.543 20:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:24.543 20:53:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.543 20:53:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:24.543 20:53:52 -- common/autotest_common.sh@877 -- # return 0 00:11:24.543 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.801 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:24.801 20:53:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:24.801 20:53:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:24.801 20:53:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:24.801 20:53:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:24.801 20:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:24.801 20:53:52 -- common/autotest_common.sh@857 -- # local i 00:11:24.801 20:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:24.801 20:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:24.801 20:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:24.801 20:53:52 -- common/autotest_common.sh@861 -- # break 00:11:24.802 20:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:24.802 20:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:24.802 20:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.802 1+0 records in 00:11:24.802 1+0 records out 00:11:24.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803197 s, 5.1 MB/s 00:11:24.802 20:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.802 20:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:11:24.802 20:53:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.802 20:53:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:24.802 20:53:52 -- common/autotest_common.sh@877 -- # return 0 00:11:24.802 20:53:52 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.802 20:53:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:24.802 20:53:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:25.060 20:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:25.060 20:53:53 -- common/autotest_common.sh@857 -- # local i 00:11:25.060 20:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:25.060 20:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:25.060 20:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:25.060 20:53:53 -- common/autotest_common.sh@861 -- # break 00:11:25.060 20:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:25.060 20:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:25.060 20:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.060 1+0 records in 00:11:25.060 1+0 records out 00:11:25.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355397 s, 11.5 MB/s 00:11:25.060 20:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.060 20:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:25.060 20:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.060 20:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:25.060 20:53:53 -- common/autotest_common.sh@877 -- # return 0 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:25.060 20:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:25.318 20:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:25.318 20:53:53 -- common/autotest_common.sh@857 -- # local i 00:11:25.318 20:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:25.318 20:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:25.318 20:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:25.318 20:53:53 -- common/autotest_common.sh@861 -- # break 00:11:25.318 20:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:25.318 20:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:25.318 20:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.318 1+0 records in 00:11:25.318 1+0 records out 00:11:25.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847539 s, 4.8 MB/s 00:11:25.318 20:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.318 20:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:25.318 20:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.318 20:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:11:25.318 20:53:53 -- common/autotest_common.sh@877 -- # return 0 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:25.318 20:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:25.576 20:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:25.576 20:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:25.576 20:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:25.576 20:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:25.576 20:53:53 -- common/autotest_common.sh@857 -- # local i 00:11:25.576 20:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:25.576 20:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:25.576 20:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:25.576 20:53:53 -- common/autotest_common.sh@861 -- # break 00:11:25.576 20:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:25.576 20:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:25.576 20:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.577 1+0 records in 00:11:25.577 1+0 records out 00:11:25.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00082957 s, 4.9 MB/s 00:11:25.577 20:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.577 20:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:25.577 20:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.577 20:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:25.577 20:53:53 -- common/autotest_common.sh@877 -- # return 0 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:25.835 20:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:25.835 20:53:53 -- common/autotest_common.sh@857 -- # local i 00:11:25.835 20:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:25.835 20:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:25.835 20:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:25.835 20:53:53 -- common/autotest_common.sh@861 -- # break 00:11:25.835 20:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:25.835 20:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:25.835 20:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.835 1+0 records in 00:11:25.835 1+0 records out 00:11:25.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011672 s, 3.5 MB/s 00:11:25.835 20:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.835 20:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:11:25.835 20:53:53 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.835 20:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:25.835 20:53:53 -- common/autotest_common.sh@877 -- # return 0 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:25.835 20:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:26.401 20:53:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:26.401 20:53:54 -- common/autotest_common.sh@857 -- # local i 00:11:26.401 20:53:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:26.401 20:53:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:26.401 20:53:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:11:26.401 20:53:54 -- common/autotest_common.sh@861 -- # break 00:11:26.401 20:53:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:26.401 20:53:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:26.401 20:53:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.401 1+0 records in 00:11:26.401 1+0 records out 00:11:26.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712553 s, 5.7 MB/s 00:11:26.401 20:53:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.401 20:53:54 -- common/autotest_common.sh@874 -- # size=4096 00:11:26.401 20:53:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.401 20:53:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:26.401 20:53:54 -- common/autotest_common.sh@877 -- # return 0 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:26.401 20:53:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:26.660 20:53:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:26.660 20:53:54 -- common/autotest_common.sh@857 -- # local i 00:11:26.660 20:53:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:26.660 20:53:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:26.660 20:53:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:26.660 20:53:54 -- common/autotest_common.sh@861 -- # break 00:11:26.660 20:53:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:26.660 20:53:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:26.660 20:53:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.660 1+0 records in 00:11:26.660 1+0 records out 00:11:26.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730781 s, 5.6 MB/s 00:11:26.660 20:53:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.660 20:53:54 -- 
common/autotest_common.sh@874 -- # size=4096 00:11:26.660 20:53:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.660 20:53:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:26.660 20:53:54 -- common/autotest_common.sh@877 -- # return 0 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:26.660 20:53:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:26.918 20:53:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:26.918 20:53:54 -- common/autotest_common.sh@857 -- # local i 00:11:26.918 20:53:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:26.918 20:53:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:26.918 20:53:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:26.918 20:53:54 -- common/autotest_common.sh@861 -- # break 00:11:26.918 20:53:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:26.918 20:53:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:26.918 20:53:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.918 1+0 records in 00:11:26.918 1+0 records out 00:11:26.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681418 s, 6.0 MB/s 00:11:26.918 20:53:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.918 20:53:54 -- common/autotest_common.sh@874 -- # size=4096 00:11:26.918 20:53:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.918 20:53:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:26.918 20:53:54 -- common/autotest_common.sh@877 -- # return 0 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:26.918 20:53:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:27.177 20:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:27.177 20:53:55 -- common/autotest_common.sh@857 -- # local i 00:11:27.177 20:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.177 20:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.177 20:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:27.177 20:53:55 -- common/autotest_common.sh@861 -- # break 00:11:27.177 20:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.177 20:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.177 20:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.177 1+0 records in 00:11:27.177 1+0 records out 00:11:27.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104326 s, 3.9 MB/s 00:11:27.177 20:53:55 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.177 20:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.177 20:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.177 20:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.177 20:53:55 -- common/autotest_common.sh@877 -- # return 0 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.177 20:53:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:27.435 20:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:27.435 20:53:55 -- common/autotest_common.sh@857 -- # local i 00:11:27.435 20:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.435 20:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.435 20:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:27.435 20:53:55 -- common/autotest_common.sh@861 -- # break 00:11:27.435 20:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.435 20:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.435 20:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.435 1+0 records in 00:11:27.435 1+0 records out 00:11:27.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129853 s, 3.2 MB/s 00:11:27.435 20:53:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.435 20:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.435 20:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.435 20:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.435 20:53:55 -- common/autotest_common.sh@877 -- # return 0 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.435 20:53:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:27.693 20:53:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:27.693 20:53:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:27.693 20:53:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:27.694 20:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:27.694 20:53:55 -- common/autotest_common.sh@857 -- # local i 00:11:27.694 20:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:27.694 20:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:27.694 20:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:27.952 20:53:55 -- common/autotest_common.sh@861 -- # break 00:11:27.952 20:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:27.952 20:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:27.952 20:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.952 1+0 records in 00:11:27.952 1+0 records out 
00:11:27.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137054 s, 3.0 MB/s 00:11:27.952 20:53:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.952 20:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:11:27.952 20:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.952 20:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:27.952 20:53:55 -- common/autotest_common.sh@877 -- # return 0 00:11:27.952 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.952 20:53:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.952 20:53:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:28.210 20:53:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:11:28.210 20:53:56 -- common/autotest_common.sh@857 -- # local i 00:11:28.210 20:53:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:28.210 20:53:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:28.210 20:53:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:28.210 20:53:56 -- common/autotest_common.sh@861 -- # break 00:11:28.210 20:53:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:28.210 20:53:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:28.210 20:53:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.210 1+0 records in 00:11:28.210 1+0 records out 00:11:28.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012935 s, 3.2 MB/s 00:11:28.210 20:53:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.210 20:53:56 -- common/autotest_common.sh@874 -- # size=4096 00:11:28.210 20:53:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.210 20:53:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:28.210 20:53:56 -- common/autotest_common.sh@877 -- # return 0 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:28.210 20:53:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd0", 00:11:28.471 "bdev_name": "Malloc0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd1", 00:11:28.471 "bdev_name": "Malloc1p0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd2", 00:11:28.471 "bdev_name": "Malloc1p1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd3", 00:11:28.471 "bdev_name": "Malloc2p0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd4", 00:11:28.471 "bdev_name": "Malloc2p1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd5", 00:11:28.471 "bdev_name": "Malloc2p2" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd6", 00:11:28.471 "bdev_name": "Malloc2p3" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd7", 00:11:28.471 "bdev_name": "Malloc2p4" 00:11:28.471 }, 
00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd8", 00:11:28.471 "bdev_name": "Malloc2p5" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd9", 00:11:28.471 "bdev_name": "Malloc2p6" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd10", 00:11:28.471 "bdev_name": "Malloc2p7" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd11", 00:11:28.471 "bdev_name": "TestPT" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd12", 00:11:28.471 "bdev_name": "raid0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd13", 00:11:28.471 "bdev_name": "concat0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd14", 00:11:28.471 "bdev_name": "raid1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd15", 00:11:28.471 "bdev_name": "AIO0" 00:11:28.471 } 00:11:28.471 ]' 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd0", 00:11:28.471 "bdev_name": "Malloc0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd1", 00:11:28.471 "bdev_name": "Malloc1p0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd2", 00:11:28.471 "bdev_name": "Malloc1p1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd3", 00:11:28.471 "bdev_name": "Malloc2p0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd4", 00:11:28.471 "bdev_name": "Malloc2p1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd5", 00:11:28.471 "bdev_name": "Malloc2p2" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd6", 00:11:28.471 "bdev_name": "Malloc2p3" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd7", 00:11:28.471 "bdev_name": "Malloc2p4" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd8", 00:11:28.471 "bdev_name": "Malloc2p5" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd9", 00:11:28.471 "bdev_name": "Malloc2p6" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd10", 00:11:28.471 "bdev_name": "Malloc2p7" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd11", 00:11:28.471 "bdev_name": "TestPT" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd12", 00:11:28.471 "bdev_name": "raid0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd13", 00:11:28.471 "bdev_name": "concat0" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd14", 00:11:28.471 "bdev_name": "raid1" 00:11:28.471 }, 00:11:28.471 { 00:11:28.471 "nbd_device": "/dev/nbd15", 00:11:28.471 "bdev_name": "AIO0" 00:11:28.471 } 00:11:28.471 ]' 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@51 -- # local i 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.471 20:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@41 -- # break 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.730 20:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@41 -- # break 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.989 20:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.989 20:53:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:29.247 20:53:57 -- bdev/nbd_common.sh@41 -- # break 00:11:29.247 20:53:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.247 20:53:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.247 20:53:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:29.247 20:53:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@41 -- # break 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.506 20:53:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:29.765 
20:53:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@41 -- # break 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@41 -- # break 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.765 20:53:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@41 -- # break 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@41 -- # break 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.345 20:53:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@41 -- # break 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:30.603 20:53:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@41 -- # break 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:30.862 20:53:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@41 -- # break 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.121 20:53:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@41 -- # break 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.380 20:53:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@41 -- # break 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.639 20:53:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@41 -- # break 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.899 20:53:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@41 -- # break 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.157 20:54:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@41 -- # break 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.416 20:54:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@65 -- # true 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@65 -- # count=0 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@122 -- # count=0 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@127 -- # return 0 00:11:32.675 20:54:00 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@12 -- # local i 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:32.675 20:54:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:32.933 /dev/nbd0 00:11:32.933 20:54:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:32.933 20:54:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:32.933 20:54:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:32.933 20:54:00 -- common/autotest_common.sh@857 -- # local i 00:11:32.933 20:54:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:32.933 20:54:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:32.933 20:54:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:32.933 20:54:00 -- common/autotest_common.sh@861 -- # break 00:11:32.933 20:54:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:32.933 20:54:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:32.933 20:54:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.933 1+0 records in 00:11:32.933 1+0 records out 00:11:32.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805295 s, 5.1 MB/s 00:11:32.933 20:54:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.933 20:54:00 -- common/autotest_common.sh@874 -- # size=4096 00:11:32.933 20:54:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.933 20:54:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:32.933 20:54:00 -- common/autotest_common.sh@877 -- # return 0 00:11:32.933 20:54:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.933 
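The waitfornbd checks traced throughout this run follow one fixed pattern: poll /proc/partitions until the nbd node appears, then prove the device actually serves I/O with a single 4 KiB O_DIRECT read into a scratch file, whose size is verified before it is removed. Below is a minimal sketch reconstructing that helper from the trace; the retry bound of 20 and the grep/dd/stat/rm sequence are taken from the trace itself, while the sleep interval and the scratch-file path are assumptions (this run writes test/bdev/nbdtest).

    #!/usr/bin/env bash
    # waitfornbd, reconstructed from the traced grep/dd/stat/rm sequence above.
    waitfornbd() {
        local nbd_name=$1
        local tmp_file=/tmp/nbdtest   # assumed scratch path
        local i
        # Wait (up to 20 polls) for the kernel to publish the partition entry.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                 # assumed back-off; interval not in the log
        done
        ((i <= 20)) || return 1
        # Prove the device serves reads: copy out one 4 KiB block with O_DIRECT.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        # The scratch file must hold a full block, i.e. be non-empty.
        local size
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [[ $size != 0 ]]
    }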
20:54:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:32.933 20:54:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:33.189 /dev/nbd1 00:11:33.189 20:54:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:33.189 20:54:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:33.189 20:54:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:33.189 20:54:01 -- common/autotest_common.sh@857 -- # local i 00:11:33.189 20:54:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:33.189 20:54:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:33.189 20:54:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:33.189 20:54:01 -- common/autotest_common.sh@861 -- # break 00:11:33.189 20:54:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:33.189 20:54:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:33.189 20:54:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.189 1+0 records in 00:11:33.189 1+0 records out 00:11:33.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540196 s, 7.6 MB/s 00:11:33.189 20:54:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.189 20:54:01 -- common/autotest_common.sh@874 -- # size=4096 00:11:33.189 20:54:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.189 20:54:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:33.189 20:54:01 -- common/autotest_common.sh@877 -- # return 0 00:11:33.189 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.189 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:33.189 20:54:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:33.447 /dev/nbd10 00:11:33.447 20:54:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:33.447 20:54:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:33.447 20:54:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:33.447 20:54:01 -- common/autotest_common.sh@857 -- # local i 00:11:33.447 20:54:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:33.447 20:54:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:33.447 20:54:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:33.447 20:54:01 -- common/autotest_common.sh@861 -- # break 00:11:33.447 20:54:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:33.447 20:54:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:33.447 20:54:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.447 1+0 records in 00:11:33.447 1+0 records out 00:11:33.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458092 s, 8.9 MB/s 00:11:33.447 20:54:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.447 20:54:01 -- common/autotest_common.sh@874 -- # size=4096 00:11:33.447 20:54:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.447 20:54:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:33.447 20:54:01 -- common/autotest_common.sh@877 -- # return 0 00:11:33.447 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:11:33.447 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:33.447 20:54:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:33.705 /dev/nbd11 00:11:33.705 20:54:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:33.705 20:54:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:33.705 20:54:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:33.705 20:54:01 -- common/autotest_common.sh@857 -- # local i 00:11:33.705 20:54:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:33.705 20:54:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:33.705 20:54:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:33.705 20:54:01 -- common/autotest_common.sh@861 -- # break 00:11:33.705 20:54:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:33.705 20:54:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:33.705 20:54:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.705 1+0 records in 00:11:33.705 1+0 records out 00:11:33.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549181 s, 7.5 MB/s 00:11:33.705 20:54:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.706 20:54:01 -- common/autotest_common.sh@874 -- # size=4096 00:11:33.706 20:54:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.706 20:54:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:33.706 20:54:01 -- common/autotest_common.sh@877 -- # return 0 00:11:33.706 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.706 20:54:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:33.706 20:54:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:33.964 /dev/nbd12 00:11:33.964 20:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:33.964 20:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:33.964 20:54:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:33.964 20:54:02 -- common/autotest_common.sh@857 -- # local i 00:11:33.964 20:54:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:33.964 20:54:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:33.964 20:54:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:33.964 20:54:02 -- common/autotest_common.sh@861 -- # break 00:11:33.964 20:54:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:33.964 20:54:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:33.964 20:54:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:33.964 1+0 records in 00:11:33.964 1+0 records out 00:11:33.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396352 s, 10.3 MB/s 00:11:33.964 20:54:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.964 20:54:02 -- common/autotest_common.sh@874 -- # size=4096 00:11:33.964 20:54:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:33.964 20:54:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:33.964 20:54:02 -- common/autotest_common.sh@877 -- # return 0 00:11:33.964 20:54:02 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:33.964 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:33.964 20:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:34.222 /dev/nbd13 00:11:34.222 20:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:34.222 20:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:34.222 20:54:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:34.222 20:54:02 -- common/autotest_common.sh@857 -- # local i 00:11:34.222 20:54:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:34.222 20:54:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:34.222 20:54:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:34.222 20:54:02 -- common/autotest_common.sh@861 -- # break 00:11:34.222 20:54:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:34.222 20:54:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:34.222 20:54:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.222 1+0 records in 00:11:34.222 1+0 records out 00:11:34.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600601 s, 6.8 MB/s 00:11:34.222 20:54:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.222 20:54:02 -- common/autotest_common.sh@874 -- # size=4096 00:11:34.222 20:54:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.222 20:54:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:34.222 20:54:02 -- common/autotest_common.sh@877 -- # return 0 00:11:34.222 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.222 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:34.222 20:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:34.480 /dev/nbd14 00:11:34.480 20:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:34.480 20:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:34.480 20:54:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:34.480 20:54:02 -- common/autotest_common.sh@857 -- # local i 00:11:34.480 20:54:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:34.480 20:54:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:34.480 20:54:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:34.480 20:54:02 -- common/autotest_common.sh@861 -- # break 00:11:34.480 20:54:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:34.480 20:54:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:34.480 20:54:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.480 1+0 records in 00:11:34.480 1+0 records out 00:11:34.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627403 s, 6.5 MB/s 00:11:34.480 20:54:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.480 20:54:02 -- common/autotest_common.sh@874 -- # size=4096 00:11:34.480 20:54:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.480 20:54:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:34.480 20:54:02 -- common/autotest_common.sh@877 -- # return 0 
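The teardown side, traced for nbd0 through nbd15 earlier in this log, inverts that check: each nbd_stop_disk RPC is followed by waitfornbd_exit, which polls /proc/partitions until the entry disappears and then falls through to an unconditional return 0, after which nbd_get_count confirms the target exports nothing. A sketch of those counterparts under the same assumptions (poll interval; rpc.py invoked relative to the SPDK repo, where the trace uses its absolute path):

    #!/usr/bin/env bash
    # waitfornbd_exit, the stop loop, and the device count, reconstructed
    # from the traced teardown sequence in this log.
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # still attached: assumed back-off between polls
            else
                break       # partition entry gone: the nbd node detached
            fi
        done
        return 0            # the trace returns 0 unconditionally
    }

    nbd_stop_disks() {
        local rpc_server=$1
        local nbd_list=($2) # space-separated /dev/nbdN paths, as in the trace
        local i
        for i in "${nbd_list[@]}"; do
            # Ask the SPDK target to detach the bdev, then wait for the kernel.
            scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"
        done
    }

    nbd_get_count() {
        # Count devices still exported; after a full teardown the trace
        # checks this reaches 0. The '|| true' mirrors the traced 'true',
        # since grep -c exits nonzero when nothing matches.
        scripts/rpc.py -s "$1" nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }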
00:11:34.480 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.480 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:34.480 20:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:34.738 /dev/nbd15 00:11:34.738 20:54:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:34.738 20:54:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:34.738 20:54:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:11:34.738 20:54:02 -- common/autotest_common.sh@857 -- # local i 00:11:34.738 20:54:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:34.738 20:54:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:34.738 20:54:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:34.738 20:54:02 -- common/autotest_common.sh@861 -- # break 00:11:34.738 20:54:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:34.738 20:54:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:34.738 20:54:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.738 1+0 records in 00:11:34.738 1+0 records out 00:11:34.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530498 s, 7.7 MB/s 00:11:34.738 20:54:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.738 20:54:02 -- common/autotest_common.sh@874 -- # size=4096 00:11:34.738 20:54:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.738 20:54:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:34.738 20:54:02 -- common/autotest_common.sh@877 -- # return 0 00:11:34.738 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.738 20:54:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:34.738 20:54:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:34.997 /dev/nbd2 00:11:34.997 20:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:34.997 20:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:34.997 20:54:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:34.997 20:54:03 -- common/autotest_common.sh@857 -- # local i 00:11:34.997 20:54:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:34.997 20:54:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:34.997 20:54:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:34.997 20:54:03 -- common/autotest_common.sh@861 -- # break 00:11:34.997 20:54:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:34.997 20:54:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:34.997 20:54:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:34.997 1+0 records in 00:11:34.997 1+0 records out 00:11:34.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696846 s, 5.9 MB/s 00:11:34.997 20:54:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.997 20:54:03 -- common/autotest_common.sh@874 -- # size=4096 00:11:34.997 20:54:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:34.997 20:54:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:34.997 20:54:03 -- common/autotest_common.sh@877 
-- # return 0 00:11:34.997 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:34.997 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:34.997 20:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:35.256 /dev/nbd3 00:11:35.256 20:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:35.256 20:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:35.256 20:54:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:35.256 20:54:03 -- common/autotest_common.sh@857 -- # local i 00:11:35.256 20:54:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:35.256 20:54:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:35.256 20:54:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:35.256 20:54:03 -- common/autotest_common.sh@861 -- # break 00:11:35.256 20:54:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:35.256 20:54:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:35.256 20:54:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.256 1+0 records in 00:11:35.256 1+0 records out 00:11:35.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535554 s, 7.6 MB/s 00:11:35.256 20:54:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.256 20:54:03 -- common/autotest_common.sh@874 -- # size=4096 00:11:35.256 20:54:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.256 20:54:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:35.256 20:54:03 -- common/autotest_common.sh@877 -- # return 0 00:11:35.256 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.256 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:35.256 20:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:35.515 /dev/nbd4 00:11:35.515 20:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:35.515 20:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:35.515 20:54:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:35.515 20:54:03 -- common/autotest_common.sh@857 -- # local i 00:11:35.515 20:54:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:35.515 20:54:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:35.515 20:54:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:35.515 20:54:03 -- common/autotest_common.sh@861 -- # break 00:11:35.515 20:54:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:35.515 20:54:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:35.515 20:54:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:35.515 1+0 records in 00:11:35.515 1+0 records out 00:11:35.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629689 s, 6.5 MB/s 00:11:35.515 20:54:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.515 20:54:03 -- common/autotest_common.sh@874 -- # size=4096 00:11:35.515 20:54:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:35.515 20:54:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:35.515 20:54:03 -- 
common/autotest_common.sh@877 -- # return 0 00:11:35.515 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:35.515 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:35.515 20:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:35.773 /dev/nbd5 00:11:35.773 20:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:35.773 20:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:35.773 20:54:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:36.031 20:54:03 -- common/autotest_common.sh@857 -- # local i 00:11:36.031 20:54:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:36.031 20:54:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:36.031 20:54:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:36.031 20:54:03 -- common/autotest_common.sh@861 -- # break 00:11:36.031 20:54:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:36.031 20:54:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:36.031 20:54:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.031 1+0 records in 00:11:36.031 1+0 records out 00:11:36.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579231 s, 7.1 MB/s 00:11:36.031 20:54:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.031 20:54:03 -- common/autotest_common.sh@874 -- # size=4096 00:11:36.031 20:54:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.031 20:54:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:36.031 20:54:03 -- common/autotest_common.sh@877 -- # return 0 00:11:36.031 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.031 20:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.031 20:54:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:36.031 /dev/nbd6 00:11:36.031 20:54:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:36.031 20:54:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:36.031 20:54:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:36.031 20:54:04 -- common/autotest_common.sh@857 -- # local i 00:11:36.031 20:54:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:36.031 20:54:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:36.032 20:54:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:36.032 20:54:04 -- common/autotest_common.sh@861 -- # break 00:11:36.032 20:54:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:36.032 20:54:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:36.032 20:54:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.032 1+0 records in 00:11:36.032 1+0 records out 00:11:36.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785047 s, 5.2 MB/s 00:11:36.032 20:54:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.032 20:54:04 -- common/autotest_common.sh@874 -- # size=4096 00:11:36.032 20:54:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.032 20:54:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:36.032 20:54:04 -- 
common/autotest_common.sh@877 -- # return 0 00:11:36.032 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.032 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.032 20:54:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:36.597 /dev/nbd7 00:11:36.597 20:54:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:36.597 20:54:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:36.597 20:54:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:36.598 20:54:04 -- common/autotest_common.sh@857 -- # local i 00:11:36.598 20:54:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:36.598 20:54:04 -- common/autotest_common.sh@861 -- # break 00:11:36.598 20:54:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.598 1+0 records in 00:11:36.598 1+0 records out 00:11:36.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813314 s, 5.0 MB/s 00:11:36.598 20:54:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.598 20:54:04 -- common/autotest_common.sh@874 -- # size=4096 00:11:36.598 20:54:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.598 20:54:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:36.598 20:54:04 -- common/autotest_common.sh@877 -- # return 0 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:36.598 /dev/nbd8 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:36.598 20:54:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:36.598 20:54:04 -- common/autotest_common.sh@857 -- # local i 00:11:36.598 20:54:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:36.598 20:54:04 -- common/autotest_common.sh@861 -- # break 00:11:36.598 20:54:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:36.598 20:54:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.598 1+0 records in 00:11:36.598 1+0 records out 00:11:36.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820071 s, 5.0 MB/s 00:11:36.598 20:54:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.598 20:54:04 -- common/autotest_common.sh@874 -- # size=4096 00:11:36.598 20:54:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.598 20:54:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:36.598 20:54:04 
-- common/autotest_common.sh@877 -- # return 0 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.598 20:54:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:36.857 /dev/nbd9 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:37.116 20:54:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:37.116 20:54:05 -- common/autotest_common.sh@857 -- # local i 00:11:37.116 20:54:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:37.116 20:54:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:37.116 20:54:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:11:37.116 20:54:05 -- common/autotest_common.sh@861 -- # break 00:11:37.116 20:54:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:37.116 20:54:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:37.116 20:54:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.116 1+0 records in 00:11:37.116 1+0 records out 00:11:37.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131523 s, 3.1 MB/s 00:11:37.116 20:54:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.116 20:54:05 -- common/autotest_common.sh@874 -- # size=4096 00:11:37.116 20:54:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.116 20:54:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:37.116 20:54:05 -- common/autotest_common.sh@877 -- # return 0 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.116 20:54:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.375 20:54:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd0", 00:11:37.375 "bdev_name": "Malloc0" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd1", 00:11:37.375 "bdev_name": "Malloc1p0" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd10", 00:11:37.375 "bdev_name": "Malloc1p1" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd11", 00:11:37.375 "bdev_name": "Malloc2p0" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd12", 00:11:37.375 "bdev_name": "Malloc2p1" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd13", 00:11:37.375 "bdev_name": "Malloc2p2" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd14", 00:11:37.375 "bdev_name": "Malloc2p3" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd15", 00:11:37.375 "bdev_name": "Malloc2p4" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd2", 00:11:37.375 "bdev_name": "Malloc2p5" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd3", 00:11:37.375 "bdev_name": "Malloc2p6" 00:11:37.375 }, 00:11:37.375 { 00:11:37.375 "nbd_device": "/dev/nbd4", 00:11:37.375 "bdev_name": "Malloc2p7" 00:11:37.376 }, 00:11:37.376 { 
00:11:37.376 "nbd_device": "/dev/nbd5", 00:11:37.376 "bdev_name": "TestPT" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd6", 00:11:37.376 "bdev_name": "raid0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd7", 00:11:37.376 "bdev_name": "concat0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd8", 00:11:37.376 "bdev_name": "raid1" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd9", 00:11:37.376 "bdev_name": "AIO0" 00:11:37.376 } 00:11:37.376 ]' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd0", 00:11:37.376 "bdev_name": "Malloc0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd1", 00:11:37.376 "bdev_name": "Malloc1p0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd10", 00:11:37.376 "bdev_name": "Malloc1p1" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd11", 00:11:37.376 "bdev_name": "Malloc2p0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd12", 00:11:37.376 "bdev_name": "Malloc2p1" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd13", 00:11:37.376 "bdev_name": "Malloc2p2" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd14", 00:11:37.376 "bdev_name": "Malloc2p3" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd15", 00:11:37.376 "bdev_name": "Malloc2p4" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd2", 00:11:37.376 "bdev_name": "Malloc2p5" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd3", 00:11:37.376 "bdev_name": "Malloc2p6" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd4", 00:11:37.376 "bdev_name": "Malloc2p7" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd5", 00:11:37.376 "bdev_name": "TestPT" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd6", 00:11:37.376 "bdev_name": "raid0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd7", 00:11:37.376 "bdev_name": "concat0" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd8", 00:11:37.376 "bdev_name": "raid1" 00:11:37.376 }, 00:11:37.376 { 00:11:37.376 "nbd_device": "/dev/nbd9", 00:11:37.376 "bdev_name": "AIO0" 00:11:37.376 } 00:11:37.376 ]' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:37.376 /dev/nbd1 00:11:37.376 /dev/nbd10 00:11:37.376 /dev/nbd11 00:11:37.376 /dev/nbd12 00:11:37.376 /dev/nbd13 00:11:37.376 /dev/nbd14 00:11:37.376 /dev/nbd15 00:11:37.376 /dev/nbd2 00:11:37.376 /dev/nbd3 00:11:37.376 /dev/nbd4 00:11:37.376 /dev/nbd5 00:11:37.376 /dev/nbd6 00:11:37.376 /dev/nbd7 00:11:37.376 /dev/nbd8 00:11:37.376 /dev/nbd9' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:37.376 /dev/nbd1 00:11:37.376 /dev/nbd10 00:11:37.376 /dev/nbd11 00:11:37.376 /dev/nbd12 00:11:37.376 /dev/nbd13 00:11:37.376 /dev/nbd14 00:11:37.376 /dev/nbd15 00:11:37.376 /dev/nbd2 00:11:37.376 /dev/nbd3 00:11:37.376 /dev/nbd4 00:11:37.376 /dev/nbd5 00:11:37.376 /dev/nbd6 00:11:37.376 /dev/nbd7 00:11:37.376 /dev/nbd8 00:11:37.376 /dev/nbd9' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@65 -- # count=16 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@95 -- # count=16 00:11:37.376 20:54:05 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:37.376 256+0 records in 00:11:37.376 256+0 records out 00:11:37.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00748278 s, 140 MB/s 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:37.376 256+0 records in 00:11:37.376 256+0 records out 00:11:37.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120257 s, 8.7 MB/s 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.376 20:54:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:37.635 256+0 records in 00:11:37.635 256+0 records out 00:11:37.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142059 s, 7.4 MB/s 00:11:37.635 20:54:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.635 20:54:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:37.894 256+0 records in 00:11:37.894 256+0 records out 00:11:37.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1482 s, 7.1 MB/s 00:11:37.894 20:54:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.894 20:54:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:37.894 256+0 records in 00:11:37.894 256+0 records out 00:11:37.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192702 s, 5.4 MB/s 00:11:37.894 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:37.894 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:38.153 256+0 records in 00:11:38.153 256+0 records out 00:11:38.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141808 s, 7.4 MB/s 00:11:38.153 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.153 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:38.153 256+0 records in 00:11:38.153 256+0 records out 00:11:38.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14371 s, 7.3 MB/s 00:11:38.153 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.153 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:38.411 256+0 records in 00:11:38.411 256+0 records out 00:11:38.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145473 s, 7.2 MB/s 00:11:38.411 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.411 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:38.670 256+0 records in 00:11:38.670 256+0 records out 00:11:38.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141619 s, 7.4 MB/s 00:11:38.670 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.670 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:38.670 256+0 records in 00:11:38.670 256+0 records out 00:11:38.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14684 s, 7.1 MB/s 00:11:38.670 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.670 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:38.928 256+0 records in 00:11:38.928 256+0 records out 00:11:38.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143725 s, 7.3 MB/s 00:11:38.928 20:54:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.928 20:54:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:38.928 256+0 records in 00:11:38.928 256+0 records out 00:11:38.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142631 s, 7.4 MB/s 00:11:38.928 20:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:38.928 20:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:39.187 256+0 records in 00:11:39.187 256+0 records out 00:11:39.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14272 s, 7.3 MB/s 00:11:39.187 20:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.187 20:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:39.187 256+0 records in 00:11:39.187 256+0 records out 00:11:39.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149584 s, 7.0 MB/s 00:11:39.187 20:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.187 20:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:39.446 256+0 records in 00:11:39.446 256+0 records out 00:11:39.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147202 s, 7.1 MB/s 00:11:39.446 20:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.446 20:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:39.705 256+0 records in 00:11:39.705 256+0 records out 00:11:39.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147948 s, 7.1 MB/s 00:11:39.705 20:54:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.705 20:54:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:39.964 256+0 records in 00:11:39.964 256+0 records out 00:11:39.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234362 s, 4.5 MB/s 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@51 -- # local i 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.964 20:54:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@41 -- # break 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.223 20:54:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@41 -- # break 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.482 20:54:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.741 20:54:08 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@41 -- # break 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:40.741 20:54:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:41.000 20:54:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@41 -- # break 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@41 -- # break 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.258 20:54:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@41 -- # break 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.517 20:54:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@41 -- # break 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.776 20:54:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:42.034 20:54:10 -- 
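Every nbd_stop_disk RPC in this teardown is paired with a waitfornbd_exit call, which polls /proc/partitions until the kernel actually drops the nbdX node, giving up after 20 iterations. The trace only shows the loop counters, the grep, and the break, so the sleep below is an assumption about the retry interval:

    # Wait for the kernel to tear down an nbd device after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1      # still listed: give the kernel time (assumed interval)
            else
                break          # node is gone, device fully stopped
            fi
        done
        return 0
    }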
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@41 -- # break 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.034 20:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@41 -- # break 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.293 20:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@41 -- # break 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.551 20:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@41 -- # break 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.810 20:54:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:43.068 20:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:43.068 20:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@41 
-- # break 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.069 20:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@41 -- # break 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.327 20:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@41 -- # break 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.586 20:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@41 -- # break 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.845 20:54:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@41 -- # break 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.102 20:54:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:44.360 20:54:12 -- bdev/nbd_common.sh@63 -- # 
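With all sixteen devices stopped, nbd_get_count asserts that the target really exports nothing anymore: a small jq pipeline over the nbd_get_disks RPC whose /dev/nbd count must be exactly 0. Roughly, following the trace (variable names are the script's own; the `|| true` mirrors the trace's bare `true`, needed because grep -c exits non-zero on zero matches):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nbd_disks_json=$($rpc_py -s /var/tmp/spdk-nbd.sock nbd_get_disks)      # '[]' after a clean stop
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd) || true
    if [ "$count" -ne 0 ]; then
        exit 1    # any leftover /dev/nbd* device is a failure
    fi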
nbd_disks_json='[]'
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@65 -- # echo ''
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@65 -- # true
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@65 -- # count=0
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@66 -- # echo 0
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@104 -- # count=0
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@109 -- # return 0
00:11:44.360 20:54:12 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:11:44.360 20:54:12 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:11:44.618 malloc_lvol_verify
00:11:44.618 20:54:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:11:44.876 370dbf11-f1e8-4497-905d-335b5459a73b
00:11:44.876 20:54:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:11:44.876 f7a79d8c-62c6-4bb7-ac04-a9cb694c4aac
00:11:44.876 20:54:13 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:11:45.134 /dev/nbd0
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:11:45.134 mke2fs 1.46.5 (30-Dec-2021)
00:11:45.134
00:11:45.134 Filesystem too small for a journal
00:11:45.134 Discarding device blocks: 0/1024 done
00:11:45.134 Creating filesystem with 1024 4k blocks and 1024 inodes
00:11:45.134
00:11:45.134 Allocating group tables: 0/1 done
00:11:45.134 Writing inode tables: 0/1 done
00:11:45.134 Writing superblocks and filesystem accounting information: 0/1 done
00:11:45.134
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@51 -- # local i
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:45.134 20:54:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
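The lvol round-trip that nbd_with_lvol_verify just traced is a compact end-to-end check: carve a 16M malloc bdev, put an lvolstore on it, allocate a 4M lvol, export it over /dev/nbd0, and prove the whole block path works by formatting it. Condensed from the RPCs above (same names and sizes; mkfs's "Filesystem too small for a journal" is the expected result at 1024 4k blocks):

    sock=/var/tmp/spdk-nbd.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16M bdev, 512B blocks
    $rpc_py -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc_py -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4M lvol in store 'lvs'
    $rpc_py -s $sock nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0          # journal-less at this size, but must succeed
    $rpc_py -s $sock nbd_stop_disk /dev/nbd0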
20:54:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@41 -- # break
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@45 -- # return 0
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:11:45.393 20:54:13 -- bdev/nbd_common.sh@147 -- # return 0
00:11:45.393 20:54:13 -- bdev/blockdev.sh@324 -- # killprocess 108183
00:11:45.393 20:54:13 -- common/autotest_common.sh@926 -- # '[' -z 108183 ']'
00:11:45.393 20:54:13 -- common/autotest_common.sh@930 -- # kill -0 108183
00:11:45.393 20:54:13 -- common/autotest_common.sh@931 -- # uname
00:11:45.393 20:54:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:11:45.393 20:54:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108183
00:11:45.393 20:54:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:11:45.393 20:54:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:11:45.393 20:54:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108183'
killing process with pid 108183
20:54:13 -- common/autotest_common.sh@945 -- # kill 108183
20:54:13 -- common/autotest_common.sh@950 -- # wait 108183
00:11:47.295 ************************************
00:11:47.295 END TEST bdev_nbd
00:11:47.295 ************************************
00:11:47.295 20:54:15 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:11:47.295
00:11:47.295 real 0m25.390s
00:11:47.295 user 0m34.795s
00:11:47.295 sys 0m8.973s
00:11:47.295 20:54:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:47.295 20:54:15 -- common/autotest_common.sh@10 -- # set +x
00:11:47.295 20:54:15 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:11:47.295 20:54:15 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']'
00:11:47.295 20:54:15 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']'
00:11:47.295 20:54:15 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:11:47.295 20:54:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:11:47.295 20:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:47.295 20:54:15 -- common/autotest_common.sh@10 -- # set +x
00:11:47.295 ************************************
00:11:47.295 START TEST bdev_fio
00:11:47.295 ************************************
00:11:47.295 20:54:15 -- common/autotest_common.sh@1104 -- # fio_test_suite ''
00:11:47.295 20:54:15 -- bdev/blockdev.sh@329 -- # local env_context
00:11:47.295 20:54:15 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:11:47.295 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:11:47.295 20:54:15 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:11:47.295 20:54:15 -- bdev/blockdev.sh@337 -- # echo ''
00:11:47.295 20:54:15 -- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:11:47.295 20:54:15 -- bdev/blockdev.sh@337 -- # env_context=
00:11:47.295 20:54:15 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:11:47.295 20:54:15 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:47.295 20:54:15 -- common/autotest_common.sh@1260 --
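killprocess, which closes out bdev_nbd above, is autotest_common.sh's standard teardown: verify the pid is still alive, log what is being killed, then kill and reap it so the next test starts against a clean target. The shape of the helper as it appears in the trace; anything beyond the traced lines (such as the handling behind the reactor_0/sudo comparison, a guard for daemons launched through sudo) is assumed:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                                    # errors out if already gone
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap it; also surfaces the app's exit status
    }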
local workload=verify 00:11:47.295 20:54:15 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:11:47.295 20:54:15 -- common/autotest_common.sh@1262 -- # local env_context= 00:11:47.295 20:54:15 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:11:47.295 20:54:15 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:47.295 20:54:15 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:11:47.295 20:54:15 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:11:47.295 20:54:15 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:47.295 20:54:15 -- common/autotest_common.sh@1280 -- # cat 00:11:47.295 20:54:15 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:11:47.295 20:54:15 -- common/autotest_common.sh@1293 -- # cat 00:11:47.295 20:54:15 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:11:47.295 20:54:15 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:11:47.295 20:54:15 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:47.295 20:54:15 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:11:47.295 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.295 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:11:47.295 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:11:47.295 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.295 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b 
in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:11:47.296 20:54:15 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:47.296 20:54:15 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:11:47.296 20:54:15 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:47.296 20:54:15 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.296 20:54:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:47.296 20:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.296 20:54:15 -- common/autotest_common.sh@10 -- # set +x 00:11:47.296 ************************************ 00:11:47.296 START TEST bdev_fio_rw_verify 00:11:47.296 ************************************ 00:11:47.296 20:54:15 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.296 20:54:15 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.296 20:54:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:47.296 20:54:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:47.296 20:54:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:47.296 20:54:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:47.296 20:54:15 -- common/autotest_common.sh@1320 -- # shift 00:11:47.296 20:54:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:47.296 20:54:15 -- common/autotest_common.sh@1323 -- # for sanitizer in 
"${sanitizers[@]}" 00:11:47.296 20:54:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:47.296 20:54:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:47.296 20:54:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:47.296 20:54:15 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:11:47.296 20:54:15 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:11:47.296 20:54:15 -- common/autotest_common.sh@1326 -- # break 00:11:47.296 20:54:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:47.296 20:54:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:47.554 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:47.554 fio-3.35 00:11:47.554 Starting 16 threads 00:11:59.761 00:11:59.761 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=109357: Sun Jun 9 20:54:27 2024 00:11:59.761 read: IOPS=66.0k, BW=258MiB/s (270MB/s)(2580MiB/10004msec) 00:11:59.761 slat (nsec): min=1988, max=37326k, avg=45727.20, stdev=519445.32 00:11:59.761 clat (usec): min=11, max=40337, avg=356.11, stdev=1434.56 00:11:59.761 
lat (usec): min=29, max=40362, avg=401.83, stdev=1524.71 00:11:59.761 clat percentiles (usec): 00:11:59.761 | 50.000th=[ 212], 99.000th=[ 1139], 99.900th=[16581], 99.990th=[28443], 00:11:59.761 | 99.999th=[40109] 00:11:59.761 write: IOPS=105k, BW=412MiB/s (432MB/s)(4073MiB/9892msec); 0 zone resets 00:11:59.761 slat (usec): min=7, max=48055, avg=75.02, stdev=704.60 00:11:59.761 clat (usec): min=11, max=59844, avg=449.42, stdev=1681.27 00:11:59.761 lat (usec): min=41, max=59861, avg=524.44, stdev=1822.18 00:11:59.761 clat percentiles (usec): 00:11:59.761 | 50.000th=[ 265], 99.000th=[ 8586], 99.900th=[23462], 99.990th=[32637], 00:11:59.761 | 99.999th=[49546] 00:11:59.761 bw ( KiB/s): min=240704, max=676013, per=98.73%, avg=416266.32, stdev=8028.71, samples=304 00:11:59.761 iops : min=60176, max=169003, avg=104066.42, stdev=2007.17, samples=304 00:11:59.761 lat (usec) : 20=0.01%, 50=0.41%, 100=6.72%, 250=46.25%, 500=42.28% 00:11:59.761 lat (usec) : 750=2.59%, 1000=0.33% 00:11:59.761 lat (msec) : 2=0.21%, 4=0.08%, 10=0.22%, 20=0.77%, 50=0.12% 00:11:59.761 lat (msec) : 100=0.01% 00:11:59.761 cpu : usr=56.57%, sys=2.20%, ctx=225406, majf=2, minf=72340 00:11:59.761 IO depths : 1=11.2%, 2=23.5%, 4=52.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.761 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.761 issued rwts: total=660361,1042665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.761 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:59.761 00:11:59.761 Run status group 0 (all jobs): 00:11:59.761 READ: bw=258MiB/s (270MB/s), 258MiB/s-258MiB/s (270MB/s-270MB/s), io=2580MiB (2705MB), run=10004-10004msec 00:11:59.761 WRITE: bw=412MiB/s (432MB/s), 412MiB/s-412MiB/s (432MB/s-432MB/s), io=4073MiB (4271MB), run=9892-9892msec 00:12:01.259 ----------------------------------------------------- 00:12:01.259 Suppressions used: 00:12:01.259 count bytes template 00:12:01.259 16 140 /usr/src/fio/parse.c 00:12:01.259 10645 1021920 /usr/src/fio/iolog.c 00:12:01.259 1 904 libcrypto.so 00:12:01.259 ----------------------------------------------------- 00:12:01.259 00:12:01.259 ************************************ 00:12:01.259 END TEST bdev_fio_rw_verify 00:12:01.259 ************************************ 00:12:01.259 00:12:01.259 real 0m13.711s 00:12:01.259 user 1m35.896s 00:12:01.259 sys 0m4.384s 00:12:01.259 20:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.259 20:54:29 -- common/autotest_common.sh@10 -- # set +x 00:12:01.259 20:54:29 -- bdev/blockdev.sh@348 -- # rm -f 00:12:01.259 20:54:29 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:01.259 20:54:29 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:01.259 20:54:29 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:01.259 20:54:29 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:01.259 20:54:29 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:01.259 20:54:29 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:01.259 20:54:29 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:01.259 20:54:29 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:01.259 20:54:29 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:01.259 20:54:29 -- 
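Two things are worth pulling out of the bdev_fio_rw_verify block above. First, the numbers: sixteen concurrent verify jobs sustained 258MiB/s of reads and 412MiB/s of writes, with median completion latencies around 212-265usec and 99.9th percentiles in the 16-23msec range, tails that against malloc-backed bdevs mostly reflect the sanitizer build and VM scheduling rather than the bdevs themselves. Second, the LD_PRELOAD step at the top: fio dlopens the spdk_bdev plugin, so the libasan the plugin was linked against must be preloaded ahead of it or the engine fails to load. A sketch of that idiom, with the flag list trimmed to the essentials of the traced command:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Find the ASan runtime the plugin links against, if any.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

The trim pass that the trace sets up next reuses the same config-file machinery, but filters the bdev list first, as the JSON dump below shows.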
common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:01.259 20:54:29 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:01.259 20:54:29 -- common/autotest_common.sh@1280 -- # cat 00:12:01.259 20:54:29 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:01.259 20:54:29 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:01.259 20:54:29 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:01.259 20:54:29 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:01.260 20:54:29 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee0e517f-a178-49c5-bc6d-6d246f461f09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee0e517f-a178-49c5-bc6d-6d246f461f09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7fb35f0f-d07f-5d7b-a6db-670326d62e8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7fb35f0f-d07f-5d7b-a6db-670326d62e8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "00a096ce-b601-5198-bac0-03cb2d3c6168"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00a096ce-b601-5198-bac0-03cb2d3c6168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9ab8e275-a481-5ec7-9059-b7663e45052c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ab8e275-a481-5ec7-9059-b7663e45052c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "4585e9bc-a41b-59e1-bad8-eaea59dfcc03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4585e9bc-a41b-59e1-bad8-eaea59dfcc03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "91e8ebf9-7d05-51ce-91de-fce4aa85611b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "91e8ebf9-7d05-51ce-91de-fce4aa85611b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f50c1585-de13-50eb-a77d-40eefa1b4f8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f50c1585-de13-50eb-a77d-40eefa1b4f8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "1286f264-be42-5321-9331-295e73d95fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1286f264-be42-5321-9331-295e73d95fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "10ebd363-d479-572f-80c6-25306d724df2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ebd363-d479-572f-80c6-25306d724df2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b91f1683-797c-5387-b130-f25cf069c72a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b91f1683-797c-5387-b130-f25cf069c72a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a139230b-6ecb-536f-a4d4-4693523c7102"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a139230b-6ecb-536f-a4d4-4693523c7102",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54e17f76-b807-5b26-b8ec-09dfd7a69569"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54e17f76-b807-5b26-b8ec-09dfd7a69569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7d4792c-3475-42b4-a526-285f46ea2c1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "20795a9e-f29a-4785-b3c6-6ee50b27aa7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "eae1837e-aee7-4800-9f4e-072b0b7f9972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "a8806ac4-9b71-4a2b-aac1-190214a0d983"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e873c7f-75c4-4556-8c41-d5168ed6a6e8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fee382f2-cc85-4f5d-8f6a-e1cee23628dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "43f1c42e-6f61-4d1c-ac29-275cd75e5432",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "34fd1cbd-2758-46dc-9ff7-0478c563d70f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c5e97f87-d478-428b-be19-7cbef5b3c4b8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c5e97f87-d478-428b-be19-7cbef5b3c4b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:01.260 20:54:29 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:01.260 Malloc1p0 00:12:01.260 Malloc1p1 00:12:01.260 Malloc2p0 00:12:01.260 Malloc2p1 00:12:01.260 Malloc2p2 00:12:01.260 Malloc2p3 00:12:01.260 Malloc2p4 00:12:01.260 Malloc2p5 00:12:01.260 Malloc2p6 00:12:01.260 Malloc2p7 00:12:01.260 TestPT 00:12:01.260 raid0 00:12:01.260 concat0 ]] 00:12:01.260 20:54:29 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ee0e517f-a178-49c5-bc6d-6d246f461f09"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ee0e517f-a178-49c5-bc6d-6d246f461f09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7fb35f0f-d07f-5d7b-a6db-670326d62e8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7fb35f0f-d07f-5d7b-a6db-670326d62e8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "00a096ce-b601-5198-bac0-03cb2d3c6168"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "00a096ce-b601-5198-bac0-03cb2d3c6168",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9ab8e275-a481-5ec7-9059-b7663e45052c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9ab8e275-a481-5ec7-9059-b7663e45052c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "4585e9bc-a41b-59e1-bad8-eaea59dfcc03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4585e9bc-a41b-59e1-bad8-eaea59dfcc03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "91e8ebf9-7d05-51ce-91de-fce4aa85611b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "91e8ebf9-7d05-51ce-91de-fce4aa85611b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f50c1585-de13-50eb-a77d-40eefa1b4f8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f50c1585-de13-50eb-a77d-40eefa1b4f8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "1286f264-be42-5321-9331-295e73d95fbc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"1286f264-be42-5321-9331-295e73d95fbc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "10ebd363-d479-572f-80c6-25306d724df2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10ebd363-d479-572f-80c6-25306d724df2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "b91f1683-797c-5387-b130-f25cf069c72a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b91f1683-797c-5387-b130-f25cf069c72a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a139230b-6ecb-536f-a4d4-4693523c7102"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a139230b-6ecb-536f-a4d4-4693523c7102",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "54e17f76-b807-5b26-b8ec-09dfd7a69569"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "54e17f76-b807-5b26-b8ec-09dfd7a69569",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e7d4792c-3475-42b4-a526-285f46ea2c1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e7d4792c-3475-42b4-a526-285f46ea2c1d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "20795a9e-f29a-4785-b3c6-6ee50b27aa7e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "eae1837e-aee7-4800-9f4e-072b0b7f9972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "a8806ac4-9b71-4a2b-aac1-190214a0d983"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a8806ac4-9b71-4a2b-aac1-190214a0d983",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e873c7f-75c4-4556-8c41-d5168ed6a6e8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "fee382f2-cc85-4f5d-8f6a-e1cee23628dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0b7b466f-5d28-4529-a4ec-e3d0fd71d5d4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "43f1c42e-6f61-4d1c-ac29-275cd75e5432",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "34fd1cbd-2758-46dc-9ff7-0478c563d70f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c5e97f87-d478-428b-be19-7cbef5b3c4b8"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c5e97f87-d478-428b-be19-7cbef5b3c4b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:01.261 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:01.261 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.261 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:01.262 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:01.262 20:54:29 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:01.262 20:54:29 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:01.262 20:54:29 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:01.262 20:54:29 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.262 20:54:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:01.262 20:54:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:01.262 20:54:29 -- common/autotest_common.sh@10 -- # set +x 00:12:01.262 ************************************ 00:12:01.262 START TEST bdev_fio_trim 00:12:01.262 ************************************ 00:12:01.262 20:54:29 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.262 20:54:29 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.262 20:54:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:01.262 20:54:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:01.262 20:54:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:01.262 20:54:29 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.262 20:54:29 -- common/autotest_common.sh@1320 -- # shift 00:12:01.262 20:54:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:01.262 20:54:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:01.262 20:54:29 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:01.262 20:54:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:01.262 20:54:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:01.262 20:54:29 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:01.262 20:54:29 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:01.262 20:54:29 -- common/autotest_common.sh@1326 -- # break 00:12:01.262 20:54:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:01.262 20:54:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:01.520 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:01.520 fio-3.35 00:12:01.520 Starting 14 threads 00:12:13.723 00:12:13.723 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=109577: Sun Jun 9 20:54:40 2024 00:12:13.723 write: IOPS=114k, BW=445MiB/s (467MB/s)(4455MiB/10001msec); 0 zone resets 00:12:13.723 slat (usec): min=2, max=32202, avg=45.34, stdev=429.35 00:12:13.723 clat (usec): min=25, max=32530, avg=298.89, stdev=1133.50 00:12:13.723 lat (usec): min=34, max=32563, avg=344.23, stdev=1211.10 00:12:13.723 clat percentiles (usec): 00:12:13.723 | 50.000th=[ 204], 99.000th=[ 429], 99.900th=[16319], 99.990th=[20317], 00:12:13.723 | 99.999th=[28181] 00:12:13.723 bw ( KiB/s): min=316440, max=639272, per=99.95%, avg=455937.89, stdev=7821.22, samples=266 00:12:13.723 iops : min=79109, max=159818, avg=113983.95, stdev=1955.33, samples=266 00:12:13.723 trim: IOPS=114k, BW=445MiB/s (467MB/s)(4455MiB/10001msec); 0 zone resets 00:12:13.723 slat (usec): min=4, max=28029, avg=30.46, stdev=358.64 00:12:13.723 clat (usec): min=4, max=32563, avg=340.85, stdev=1200.10 00:12:13.723 lat (usec): min=14, max=32584, avg=371.31, stdev=1251.83 00:12:13.723 clat percentiles (usec): 00:12:13.723 | 50.000th=[ 237], 99.000th=[ 490], 99.900th=[16319], 99.990th=[20317], 00:12:13.723 | 99.999th=[28181] 00:12:13.723 bw ( KiB/s): min=316440, max=639272, per=99.95%, avg=455938.32, stdev=7821.16, samples=266 00:12:13.723 iops : min=79109, max=159818, avg=113984.05, stdev=1955.33, samples=266 00:12:13.723 lat (usec) : 10=0.01%, 20=0.01%, 50=0.24%, 100=3.79%, 250=58.18% 00:12:13.723 lat (usec) : 500=36.87%, 750=0.13%, 1000=0.05% 00:12:13.723 lat (msec) : 2=0.02%, 4=0.01%, 10=0.03%, 20=0.63%, 50=0.02% 00:12:13.723 cpu : usr=68.96%, sys=0.44%, ctx=159605, majf=0, minf=708 00:12:13.723 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:13.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.723 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.723 issued rwts: total=0,1140514,1140519,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:13.723 00:12:13.723 Run status group 0 (all jobs): 00:12:13.723 WRITE: bw=445MiB/s (467MB/s), 445MiB/s-445MiB/s (467MB/s-467MB/s), io=4455MiB (4672MB), run=10001-10001msec 00:12:13.724 TRIM: bw=445MiB/s (467MB/s), 445MiB/s-445MiB/s (467MB/s-467MB/s), io=4455MiB (4672MB), run=10001-10001msec 00:12:14.657 ----------------------------------------------------- 00:12:14.657 Suppressions used: 00:12:14.657 count bytes template 00:12:14.657 14 129 /usr/src/fio/parse.c 00:12:14.657 1 904 libcrypto.so 00:12:14.657 ----------------------------------------------------- 00:12:14.657 00:12:14.657 ************************************ 00:12:14.657 END TEST bdev_fio_trim 00:12:14.657 ************************************ 00:12:14.657 00:12:14.657 real 0m13.395s 00:12:14.657 user 1m41.507s 00:12:14.657 sys 0m1.412s 00:12:14.657 20:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.657 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:12:14.657 20:54:42 -- 
bdev/blockdev.sh@366 -- # rm -f 00:12:14.657 20:54:42 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:14.657 20:54:42 -- bdev/blockdev.sh@368 -- # popd 00:12:14.657 /home/vagrant/spdk_repo/spdk 00:12:14.657 20:54:42 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:14.657 00:12:14.657 real 0m27.454s 00:12:14.657 user 3m17.631s 00:12:14.657 sys 0m5.888s 00:12:14.657 20:54:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.657 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:12:14.657 ************************************ 00:12:14.657 END TEST bdev_fio 00:12:14.657 ************************************ 00:12:14.657 20:54:42 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:14.657 20:54:42 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:14.657 20:54:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:14.657 20:54:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.657 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:12:14.657 ************************************ 00:12:14.657 START TEST bdev_verify 00:12:14.657 ************************************ 00:12:14.657 20:54:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:14.915 [2024-06-09 20:54:42.914946] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:14.915 [2024-06-09 20:54:42.915746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109764 ] 00:12:14.915 [2024-06-09 20:54:43.089424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:15.173 [2024-06-09 20:54:43.270341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.173 [2024-06-09 20:54:43.270350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.740 [2024-06-09 20:54:43.614697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:15.740 [2024-06-09 20:54:43.615220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:15.740 [2024-06-09 20:54:43.622685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:15.741 [2024-06-09 20:54:43.623085] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:15.741 [2024-06-09 20:54:43.630690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:15.741 [2024-06-09 20:54:43.631012] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:15.741 [2024-06-09 20:54:43.631277] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:15.741 [2024-06-09 20:54:43.801550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:15.741 [2024-06-09 20:54:43.802931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.741 [2024-06-09 20:54:43.803278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000009980 00:12:15.741 [2024-06-09 20:54:43.803526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.741 [2024-06-09 20:54:43.806082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.741 [2024-06-09 20:54:43.806378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:16.307 Running I/O for 5 seconds... 00:12:21.601 00:12:21.601 Latency(us) 00:12:21.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.601 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x0 length 0x1000 00:12:21.601 Malloc0 : 5.19 1295.49 5.06 0.00 0.00 97328.86 1906.50 214481.45 00:12:21.601 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x1000 length 0x1000 00:12:21.601 Malloc0 : 5.24 1214.00 4.74 0.00 0.00 104698.36 2651.23 280255.77 00:12:21.601 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x0 length 0x800 00:12:21.601 Malloc1p0 : 5.19 911.49 3.56 0.00 0.00 138420.75 3753.43 130595.37 00:12:21.601 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x800 length 0x800 00:12:21.601 Malloc1p0 : 5.24 871.02 3.40 0.00 0.00 145909.61 3753.43 129642.12 00:12:21.601 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x0 length 0x800 00:12:21.601 Malloc1p1 : 5.20 911.23 3.56 0.00 0.00 138284.92 3634.27 129642.12 00:12:21.601 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.601 Verification LBA range: start 0x800 length 0x800 00:12:21.601 Malloc1p1 : 5.24 870.40 3.40 0.00 0.00 145769.61 3678.95 126782.37 00:12:21.601 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p0 : 5.23 921.66 3.60 0.00 0.00 137571.08 3649.16 129642.12 00:12:21.602 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p0 : 5.25 869.66 3.40 0.00 0.00 145652.38 3783.21 122969.37 00:12:21.602 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p1 : 5.23 921.42 3.60 0.00 0.00 137442.72 3783.21 127735.62 00:12:21.602 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p1 : 5.25 868.87 3.39 0.00 0.00 145541.86 3738.53 119632.99 00:12:21.602 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p2 : 5.23 921.18 3.60 0.00 0.00 137297.29 3589.59 124875.87 00:12:21.602 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p2 : 5.26 868.08 3.39 0.00 0.00 145425.62 3664.06 116296.61 00:12:21.602 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p3 : 5.24 920.95 3.60 0.00 0.00 137159.08 3470.43 122969.37 00:12:21.602 Job: Malloc2p3 
(Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p3 : 5.26 867.35 3.39 0.00 0.00 145338.20 3619.37 113913.48 00:12:21.602 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p4 : 5.24 920.68 3.60 0.00 0.00 137019.21 3589.59 122969.37 00:12:21.602 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p4 : 5.27 866.64 3.39 0.00 0.00 145198.94 3693.85 113913.48 00:12:21.602 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p5 : 5.24 920.22 3.59 0.00 0.00 136907.38 3470.43 122969.37 00:12:21.602 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p5 : 5.27 865.88 3.38 0.00 0.00 145064.39 3678.95 113913.48 00:12:21.602 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p6 : 5.24 919.58 3.59 0.00 0.00 136772.01 3530.01 124875.87 00:12:21.602 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p6 : 5.27 865.22 3.38 0.00 0.00 144950.68 3425.75 114866.73 00:12:21.602 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x200 00:12:21.602 Malloc2p7 : 5.25 918.81 3.59 0.00 0.00 136673.43 3470.43 124875.87 00:12:21.602 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x200 length 0x200 00:12:21.602 Malloc2p7 : 5.28 865.05 3.38 0.00 0.00 144815.88 3500.22 114866.73 00:12:21.602 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x1000 00:12:21.602 TestPT : 5.25 904.29 3.53 0.00 0.00 138586.55 6315.29 122969.37 00:12:21.602 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x1000 length 0x1000 00:12:21.602 TestPT : 5.28 832.68 3.25 0.00 0.00 150119.20 49330.73 166818.91 00:12:21.602 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x2000 00:12:21.602 raid0 : 5.26 917.16 3.58 0.00 0.00 136404.08 3395.96 125829.12 00:12:21.602 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x2000 length 0x2000 00:12:21.602 raid0 : 5.28 864.73 3.38 0.00 0.00 144490.22 3708.74 114390.11 00:12:21.602 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x2000 00:12:21.602 concat0 : 5.26 916.40 3.58 0.00 0.00 136294.98 3485.32 129642.12 00:12:21.602 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x2000 length 0x2000 00:12:21.602 concat0 : 5.28 864.57 3.38 0.00 0.00 144328.83 3708.74 114390.11 00:12:21.602 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x1000 
00:12:21.602 raid1 : 5.27 915.63 3.58 0.00 0.00 136157.00 3991.74 133455.13 00:12:21.602 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x1000 length 0x1000 00:12:21.602 raid1 : 5.28 864.41 3.38 0.00 0.00 144132.47 4170.47 114390.11 00:12:21.602 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x0 length 0x4e2 00:12:21.602 AIO0 : 5.27 914.47 3.57 0.00 0.00 135932.03 9175.04 135361.63 00:12:21.602 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:21.602 Verification LBA range: start 0x4e2 length 0x4e2 00:12:21.602 AIO0 : 5.28 864.08 3.38 0.00 0.00 143796.36 8579.26 113436.86 00:12:21.602 =================================================================================================================== 00:12:21.602 Total : 29233.29 114.19 0.00 0.00 137699.97 1906.50 280255.77 00:12:23.507 ************************************ 00:12:23.507 END TEST bdev_verify 00:12:23.507 ************************************ 00:12:23.507 00:12:23.507 real 0m8.683s 00:12:23.507 user 0m15.272s 00:12:23.507 sys 0m0.654s 00:12:23.507 20:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.507 20:54:51 -- common/autotest_common.sh@10 -- # set +x 00:12:23.507 20:54:51 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:23.507 20:54:51 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:23.507 20:54:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.507 20:54:51 -- common/autotest_common.sh@10 -- # set +x 00:12:23.507 ************************************ 00:12:23.507 START TEST bdev_verify_big_io 00:12:23.507 ************************************ 00:12:23.507 20:54:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:23.507 [2024-06-09 20:54:51.636184] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
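Every bdevperf run in this log is driven by the same --json config (-q is the queue depth, -o the IO size in bytes, -w the workload, -t the runtime in seconds). A minimal stand-in for that file is sketched below; the real test/bdev/bdev.json is generated earlier in the run and also declares the split, passthru, raid and AIO bdevs dumped above, so the contents here are illustrative only:

```bash
# Minimal stand-in for the --json config bdevperf consumes (paths relative
# to an SPDK checkout; the real bdev.json in this run is much larger).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
```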
00:12:23.507 [2024-06-09 20:54:51.636694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109889 ] 00:12:23.766 [2024-06-09 20:54:51.807796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:24.025 [2024-06-09 20:54:52.006203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.025 [2024-06-09 20:54:52.006213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.283 [2024-06-09 20:54:52.389555] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.283 [2024-06-09 20:54:52.389950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.283 [2024-06-09 20:54:52.397493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.283 [2024-06-09 20:54:52.397755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.283 [2024-06-09 20:54:52.405560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.283 [2024-06-09 20:54:52.405748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:24.283 [2024-06-09 20:54:52.405897] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:24.540 [2024-06-09 20:54:52.589476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.541 [2024-06-09 20:54:52.589994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.541 [2024-06-09 20:54:52.590103] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:24.541 [2024-06-09 20:54:52.590540] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.541 [2024-06-09 20:54:52.593675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.541 [2024-06-09 20:54:52.593901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:24.798 [2024-06-09 20:54:52.968065] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:24.798 [2024-06-09 20:54:52.973141] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.977958] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.982383] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.985860] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.990018] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.993396] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:52.997480] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:53.000925] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:53.004976] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:25.055 [2024-06-09 20:54:53.008488] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.012608] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.016055] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.020277] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.024381] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.027786] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:25.056 [2024-06-09 20:54:53.113977] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:25.056 [2024-06-09 20:54:53.120932] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:25.056 Running I/O for 5 seconds... 00:12:31.663 00:12:31.663 Latency(us) 00:12:31.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.663 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x0 length 0x100 00:12:31.663 Malloc0 : 5.39 466.35 29.15 0.00 0.00 266724.39 16324.42 865551.83 00:12:31.663 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x100 length 0x100 00:12:31.663 Malloc0 : 5.43 439.81 27.49 0.00 0.00 283844.97 14775.39 953250.91 00:12:31.663 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x0 length 0x80 00:12:31.663 Malloc1p0 : 5.48 341.34 21.33 0.00 0.00 358685.89 33602.09 777852.74 00:12:31.663 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x80 length 0x80 00:12:31.663 Malloc1p0 : 5.50 252.69 15.79 0.00 0.00 487537.74 32887.16 873177.83 00:12:31.663 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x0 length 0x80 00:12:31.663 Malloc1p1 : 5.62 158.22 9.89 0.00 0.00 771583.70 35746.91 1639591.56 00:12:31.663 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.663 Verification LBA range: start 0x80 length 0x80 00:12:31.663 Malloc1p1 : 5.71 145.14 9.07 0.00 0.00 827349.35 30504.03 1692973.61 00:12:31.664 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p0 : 5.48 90.49 5.66 0.00 0.00 335687.92 5689.72 486157.96 00:12:31.664 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p0 : 5.50 86.20 5.39 0.00 0.00 353462.02 5600.35 549072.52 00:12:31.664 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p1 : 5.48 90.47 5.65 0.00 0.00 334700.71 6255.71 474718.95 00:12:31.664 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p1 : 5.50 86.18 5.39 0.00 0.00 352443.82 5749.29 537633.51 00:12:31.664 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p2 : 5.48 90.45 5.65 0.00 0.00 333592.89 5421.61 465186.44 00:12:31.664 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p2 : 5.50 86.16 5.39 0.00 0.00 351237.89 5570.56 526194.50 00:12:31.664 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p3 : 5.48 90.43 5.65 0.00 0.00 332575.17 6285.50 451840.93 00:12:31.664 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p3 : 5.50 86.15 5.38 0.00 0.00 350124.98 5928.03 518568.49 00:12:31.664 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p4 : 5.49 90.41 5.65 0.00 0.00 331506.94 5451.40 442308.42 00:12:31.664 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p4 : 5.50 86.13 5.38 0.00 0.00 349068.76 5808.87 507129.48 00:12:31.664 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p5 : 5.52 93.26 5.83 0.00 0.00 321899.50 5659.93 430869.41 00:12:31.664 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p5 : 5.50 86.11 5.38 0.00 0.00 347846.74 6136.55 495690.47 00:12:31.664 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p6 : 5.52 93.24 5.83 0.00 0.00 320907.72 5034.36 421336.90 00:12:31.664 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p6 : 5.51 86.10 5.38 0.00 0.00 346671.83 6225.92 484251.46 00:12:31.664 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x20 00:12:31.664 Malloc2p7 : 5.52 93.22 5.83 0.00 0.00 319983.11 4498.15 411804.39 00:12:31.664 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x20 length 0x20 00:12:31.664 Malloc2p7 : 5.51 86.08 5.38 0.00 0.00 345505.53 5630.14 472812.45 00:12:31.664 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x100 00:12:31.664 TestPT : 5.64 158.22 9.89 0.00 0.00 743153.02 42896.29 1677721.60 00:12:31.664 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x100 length 0x100 00:12:31.664 TestPT : 5.72 145.88 9.12 0.00 0.00 798169.47 56718.43 1692973.61 00:12:31.664 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x200 00:12:31.664 raid0 : 5.65 163.32 10.21 0.00 0.00 713906.74 34317.03 1639591.56 00:12:31.664 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x200 length 0x200 00:12:31.664 raid0 : 5.72 155.47 9.72 0.00 0.00 745658.14 41704.73 1677721.60 00:12:31.664 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x200 00:12:31.664 concat0 : 5.60 170.58 10.66 0.00 0.00 678386.03 27644.28 1639591.56 00:12:31.664 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x200 length 0x200 00:12:31.664 concat0 : 5.70 161.85 10.12 0.00 
0.00 706793.42 32887.16 1677721.60 00:12:31.664 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x100 00:12:31.664 raid1 : 5.69 178.83 11.18 0.00 0.00 637612.09 17873.45 1647217.57 00:12:31.664 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x100 length 0x100 00:12:31.664 raid1 : 5.70 183.90 11.49 0.00 0.00 617901.87 28955.00 1685347.61 00:12:31.664 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x0 length 0x4e 00:12:31.664 AIO0 : 5.68 190.39 11.90 0.00 0.00 363115.01 2085.24 899868.86 00:12:31.664 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:31.664 Verification LBA range: start 0x4e length 0x4e 00:12:31.664 AIO0 : 5.72 178.17 11.14 0.00 0.00 384165.29 2725.70 976128.93 00:12:31.664 =================================================================================================================== 00:12:31.664 Total : 4911.26 306.95 0.00 0.00 467307.12 2085.24 1692973.61 00:12:33.038 ************************************ 00:12:33.038 END TEST bdev_verify_big_io 00:12:33.038 ************************************ 00:12:33.038 00:12:33.038 real 0m9.513s 00:12:33.038 user 0m17.000s 00:12:33.038 sys 0m0.820s 00:12:33.038 20:55:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.038 20:55:01 -- common/autotest_common.sh@10 -- # set +x 00:12:33.038 20:55:01 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:33.038 20:55:01 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:33.038 20:55:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:33.038 20:55:01 -- common/autotest_common.sh@10 -- # set +x 00:12:33.038 ************************************ 00:12:33.038 START TEST bdev_write_zeroes 00:12:33.038 ************************************ 00:12:33.038 20:55:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:33.038 [2024-06-09 20:55:01.188847] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
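A note on the queue-depth warnings printed before the big-IO run above: the per-bdev caps follow directly from bdev geometry at -o 65536. A bdev can hold floor(num_blocks * block_size / io_size) in-flight IOs, and the cap bdevperf applied is half of that; the halving is inferred from the numbers in this log, not stated by the tool:

```bash
# Reproducing the caps from the warnings above. The division by 2 is an
# inference from the logged values (32 and 78), not documented behavior.
cap() { echo $(( $1 * $2 / $3 / 2 )); }
cap 8192 512  65536   # split bdevs (Malloc2p0..7): 64 IOs -> capped at 32
cap 5000 2048 65536   # AIO0: 156 IOs -> capped at 78
```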
00:12:33.038 [2024-06-09 20:55:01.189260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110033 ] 00:12:33.296 [2024-06-09 20:55:01.346452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.554 [2024-06-09 20:55:01.537225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.812 [2024-06-09 20:55:01.875468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:33.812 [2024-06-09 20:55:01.876372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:33.812 [2024-06-09 20:55:01.883447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:33.812 [2024-06-09 20:55:01.883698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:33.812 [2024-06-09 20:55:01.891466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:33.812 [2024-06-09 20:55:01.891685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:33.812 [2024-06-09 20:55:01.891820] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:34.070 [2024-06-09 20:55:02.104796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:34.070 [2024-06-09 20:55:02.105132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.070 [2024-06-09 20:55:02.105309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:34.070 [2024-06-09 20:55:02.105445] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.070 [2024-06-09 20:55:02.107966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.070 [2024-06-09 20:55:02.108182] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:34.328 Running I/O for 1 seconds... 
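The vbdev_passthru lines above show TestPT being stacked on Malloc3 again for this run. In the autotest harness this is done by the test scripts; the equivalent standalone calls, using the standard SPDK RPCs, would look like this:

```bash
# Equivalent RPCs to the passthru registration logged above: TestPT is a
# pass-through vbdev claiming Malloc3.
scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
scripts/rpc.py bdev_get_bdevs -b TestPT   # shows "driver_specific": {"passthru": ...}
```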
00:12:35.702 00:12:35.702 Latency(us) 00:12:35.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.702 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc0 : 1.03 5854.54 22.87 0.00 0.00 21848.23 659.08 37891.72 00:12:35.702 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc1p0 : 1.03 5847.86 22.84 0.00 0.00 21837.96 848.99 36938.47 00:12:35.702 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc1p1 : 1.03 5840.89 22.82 0.00 0.00 21824.92 875.05 36223.53 00:12:35.702 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p0 : 1.03 5834.66 22.79 0.00 0.00 21809.17 848.99 35508.60 00:12:35.702 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p1 : 1.03 5828.18 22.77 0.00 0.00 21793.36 789.41 34793.66 00:12:35.702 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p2 : 1.03 5821.70 22.74 0.00 0.00 21770.18 867.61 33840.41 00:12:35.702 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p3 : 1.03 5815.34 22.72 0.00 0.00 21757.50 845.27 33125.47 00:12:35.702 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p4 : 1.05 5848.97 22.85 0.00 0.00 21594.04 837.82 32410.53 00:12:35.702 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p5 : 1.05 5843.19 22.82 0.00 0.00 21576.15 774.52 31695.59 00:12:35.702 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p6 : 1.05 5837.44 22.80 0.00 0.00 21558.66 808.03 30980.65 00:12:35.702 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 Malloc2p7 : 1.05 5831.62 22.78 0.00 0.00 21545.74 752.17 30146.56 00:12:35.702 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 TestPT : 1.05 5825.82 22.76 0.00 0.00 21528.90 811.75 29312.47 00:12:35.702 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 raid0 : 1.06 5819.22 22.73 0.00 0.00 21499.32 1362.85 28001.75 00:12:35.702 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 concat0 : 1.06 5812.28 22.70 0.00 0.00 21462.51 1325.61 26810.18 00:12:35.702 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 raid1 : 1.06 5804.09 22.67 0.00 0.00 21417.27 2174.60 24546.21 00:12:35.702 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.702 AIO0 : 1.06 5788.74 22.61 0.00 0.00 21373.66 1899.05 24784.52 00:12:35.702 =================================================================================================================== 00:12:35.702 Total : 93254.52 364.28 0.00 0.00 21635.83 659.08 37891.72 00:12:37.604 ************************************ 00:12:37.604 END TEST bdev_write_zeroes 00:12:37.604 ************************************ 00:12:37.604 00:12:37.604 real 0m4.149s 00:12:37.604 user 0m3.456s 00:12:37.604 sys 0m0.468s 00:12:37.604 20:55:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.604 20:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:37.604 20:55:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:37.604 20:55:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:37.604 20:55:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.604 20:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:37.604 ************************************ 00:12:37.604 START TEST bdev_json_nonenclosed 00:12:37.604 ************************************ 00:12:37.604 20:55:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:37.604 [2024-06-09 20:55:05.423117] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:37.604 [2024-06-09 20:55:05.423746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110105 ] 00:12:37.604 [2024-06-09 20:55:05.592855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.604 [2024-06-09 20:55:05.772280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.604 [2024-06-09 20:55:05.772822] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:37.604 [2024-06-09 20:55:05.772990] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:38.172 ************************************ 00:12:38.172 END TEST bdev_json_nonenclosed 00:12:38.172 ************************************ 00:12:38.172 00:12:38.172 real 0m0.768s 00:12:38.172 user 0m0.531s 00:12:38.172 sys 0m0.136s 00:12:38.172 20:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.172 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:38.172 20:55:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:38.172 20:55:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:38.172 20:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.172 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:38.172 ************************************ 00:12:38.172 START TEST bdev_json_nonarray 00:12:38.172 ************************************ 00:12:38.172 20:55:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:38.172 [2024-06-09 20:55:06.240140] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
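bdev_json_nonenclosed (above) and bdev_json_nonarray (starting here) are negative tests: each hands bdevperf a deliberately malformed --json config and expects spdk_subsystem_init_from_json_config() to reject it with the error quoted in the log. Inputs that would trigger each message are sketched below; the contents are illustrative, not the repo's actual nonenclosed.json/nonarray.json fixtures:

```bash
# Inputs that would trip the two json_config checks exercised here.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# -> *ERROR*: Invalid JSON configuration: not enclosed in {}.

cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
# -> *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
```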
00:12:38.172 [2024-06-09 20:55:06.240543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110140 ] 00:12:38.431 [2024-06-09 20:55:06.403797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.431 [2024-06-09 20:55:06.595987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.431 [2024-06-09 20:55:06.596418] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:38.431 [2024-06-09 20:55:06.596583] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:38.999 00:12:38.999 real 0m0.768s 00:12:38.999 user 0m0.554s 00:12:38.999 sys 0m0.112s 00:12:38.999 20:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.999 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:38.999 ************************************ 00:12:38.999 END TEST bdev_json_nonarray 00:12:38.999 ************************************ 00:12:38.999 20:55:06 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:38.999 20:55:06 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:38.999 20:55:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:38.999 20:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.999 20:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:38.999 ************************************ 00:12:38.999 START TEST bdev_qos 00:12:38.999 ************************************ 00:12:38.999 20:55:07 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:12:38.999 20:55:07 -- bdev/blockdev.sh@444 -- # QOS_PID=110171 00:12:38.999 20:55:07 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 110171' 00:12:38.999 20:55:07 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:38.999 Process qos testing pid: 110171 00:12:38.999 20:55:07 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:38.999 20:55:07 -- bdev/blockdev.sh@447 -- # waitforlisten 110171 00:12:38.999 20:55:07 -- common/autotest_common.sh@819 -- # '[' -z 110171 ']' 00:12:38.999 20:55:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.999 20:55:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:38.999 20:55:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.999 20:55:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:38.999 20:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:38.999 [2024-06-09 20:55:07.074882] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
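Unlike the earlier runs, the QoS suite starts bdevperf with -z, so the harness must wait for the target's RPC socket to come up before configuring it; that is what the waitforlisten trace above is doing. A minimal stand-in for that helper is sketched below; the real one in autotest_common.sh is more thorough (retry limits, extra liveness checks), so treat this as the core loop only:

```bash
# Minimal stand-in for autotest_common.sh's waitforlisten: poll until the
# target pid is serving RPCs on its UNIX socket.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1   # process died before it started listening
}
```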
00:12:38.999 [2024-06-09 20:55:07.075619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110171 ] 00:12:39.258 [2024-06-09 20:55:07.255880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.517 [2024-06-09 20:55:07.535459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.124 20:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:40.124 20:55:08 -- common/autotest_common.sh@852 -- # return 0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 Malloc_0 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:40.124 20:55:08 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:12:40.124 20:55:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:40.124 20:55:08 -- common/autotest_common.sh@889 -- # local i 00:12:40.124 20:55:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:40.124 20:55:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 [ 00:12:40.124 { 00:12:40.124 "name": "Malloc_0", 00:12:40.124 "aliases": [ 00:12:40.124 "82906e1e-c5d0-4fa9-9a3e-57b4e0bb46e2" 00:12:40.124 ], 00:12:40.124 "product_name": "Malloc disk", 00:12:40.124 "block_size": 512, 00:12:40.124 "num_blocks": 262144, 00:12:40.124 "uuid": "82906e1e-c5d0-4fa9-9a3e-57b4e0bb46e2", 00:12:40.124 "assigned_rate_limits": { 00:12:40.124 "rw_ios_per_sec": 0, 00:12:40.124 "rw_mbytes_per_sec": 0, 00:12:40.124 "r_mbytes_per_sec": 0, 00:12:40.124 "w_mbytes_per_sec": 0 00:12:40.124 }, 00:12:40.124 "claimed": false, 00:12:40.124 "zoned": false, 00:12:40.124 "supported_io_types": { 00:12:40.124 "read": true, 00:12:40.124 "write": true, 00:12:40.124 "unmap": true, 00:12:40.124 "write_zeroes": true, 00:12:40.124 "flush": true, 00:12:40.124 "reset": true, 00:12:40.124 "compare": false, 00:12:40.124 "compare_and_write": false, 00:12:40.124 "abort": true, 00:12:40.124 "nvme_admin": false, 00:12:40.124 "nvme_io": false 00:12:40.124 }, 00:12:40.124 "memory_domains": [ 00:12:40.124 { 00:12:40.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.124 "dma_device_type": 2 00:12:40.124 } 00:12:40.124 ], 00:12:40.124 "driver_specific": {} 00:12:40.124 } 00:12:40.124 ] 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@895 -- # return 0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # 
set +x 00:12:40.124 Null_1 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:40.124 20:55:08 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:12:40.124 20:55:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:40.124 20:55:08 -- common/autotest_common.sh@889 -- # local i 00:12:40.124 20:55:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:40.124 20:55:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:40.124 20:55:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.124 20:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.124 [ 00:12:40.124 { 00:12:40.124 "name": "Null_1", 00:12:40.124 "aliases": [ 00:12:40.124 "a1a0f315-38e9-4a1f-bf17-a31b28b50cb5" 00:12:40.124 ], 00:12:40.124 "product_name": "Null disk", 00:12:40.124 "block_size": 512, 00:12:40.124 "num_blocks": 262144, 00:12:40.124 "uuid": "a1a0f315-38e9-4a1f-bf17-a31b28b50cb5", 00:12:40.124 "assigned_rate_limits": { 00:12:40.124 "rw_ios_per_sec": 0, 00:12:40.124 "rw_mbytes_per_sec": 0, 00:12:40.124 "r_mbytes_per_sec": 0, 00:12:40.124 "w_mbytes_per_sec": 0 00:12:40.124 }, 00:12:40.124 "claimed": false, 00:12:40.124 "zoned": false, 00:12:40.124 "supported_io_types": { 00:12:40.124 "read": true, 00:12:40.124 "write": true, 00:12:40.124 "unmap": false, 00:12:40.124 "write_zeroes": true, 00:12:40.124 "flush": false, 00:12:40.124 "reset": true, 00:12:40.124 "compare": false, 00:12:40.124 "compare_and_write": false, 00:12:40.124 "abort": true, 00:12:40.124 "nvme_admin": false, 00:12:40.124 "nvme_io": false 00:12:40.124 }, 00:12:40.124 "driver_specific": {} 00:12:40.124 } 00:12:40.124 ] 00:12:40.124 20:55:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.124 20:55:08 -- common/autotest_common.sh@895 -- # return 0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:40.124 20:55:08 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:40.124 20:55:08 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:40.124 20:55:08 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:40.124 20:55:08 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:40.124 20:55:08 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:40.124 20:55:08 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:40.124 20:55:08 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:40.124 20:55:08 -- bdev/blockdev.sh@376 -- # tail -1 00:12:40.403 Running I/O for 60 seconds... 
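Note: Null_1's descriptor above reports "unmap" and "flush" as false, unlike Malloc_0; a null bdev completes I/O without touching data, which is why the bandwidth half of this suite runs against it. Every throughput figure in the entries that follow comes from the measurement pipeline already traced here; unpacked, it looks like this (flag glosses inferred from the sysstat-style interface of scripts/iostat.py):

    # Measurement pipeline as invoked in the trace:
    scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1
    #   -d         device report only
    #   -i 1 -t 5  five samples at one-second intervals; tail -1 keeps the settled one
    # A sample row (see the next entries): Malloc_0 76084.52 304338.10 0.00 0.00 308224.00 ...
    # The IOPS checks read column 2 via awk '{print $2}'; the bandwidth checks read column 6.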
00:12:45.672 20:55:13 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 76084.52 304338.10 0.00 0.00 308224.00 0.00 0.00 ' 00:12:45.672 20:55:13 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:45.672 20:55:13 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:45.672 20:55:13 -- bdev/blockdev.sh@378 -- # iostat_result=76084.52 00:12:45.672 20:55:13 -- bdev/blockdev.sh@383 -- # echo 76084 00:12:45.672 20:55:13 -- bdev/blockdev.sh@414 -- # io_result=76084 00:12:45.672 20:55:13 -- bdev/blockdev.sh@416 -- # iops_limit=19000 00:12:45.672 20:55:13 -- bdev/blockdev.sh@417 -- # '[' 19000 -gt 1000 ']' 00:12:45.672 20:55:13 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 19000 Malloc_0 00:12:45.672 20:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.672 20:55:13 -- common/autotest_common.sh@10 -- # set +x 00:12:45.672 20:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.672 20:55:13 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 19000 IOPS Malloc_0 00:12:45.672 20:55:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:45.672 20:55:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.672 20:55:13 -- common/autotest_common.sh@10 -- # set +x 00:12:45.672 ************************************ 00:12:45.672 START TEST bdev_qos_iops 00:12:45.672 ************************************ 00:12:45.672 20:55:13 -- common/autotest_common.sh@1104 -- # run_qos_test 19000 IOPS Malloc_0 00:12:45.672 20:55:13 -- bdev/blockdev.sh@387 -- # local qos_limit=19000 00:12:45.672 20:55:13 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:45.672 20:55:13 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:45.672 20:55:13 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:45.672 20:55:13 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:45.672 20:55:13 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:45.672 20:55:13 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:45.672 20:55:13 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:45.672 20:55:13 -- bdev/blockdev.sh@376 -- # tail -1 00:12:50.943 20:55:18 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 18956.28 75825.12 0.00 0.00 76988.00 0.00 0.00 ' 00:12:50.943 20:55:18 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:50.943 20:55:18 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:50.943 20:55:18 -- bdev/blockdev.sh@378 -- # iostat_result=18956.28 00:12:50.943 20:55:18 -- bdev/blockdev.sh@383 -- # echo 18956 00:12:50.943 ************************************ 00:12:50.943 END TEST bdev_qos_iops 00:12:50.943 ************************************ 00:12:50.943 20:55:18 -- bdev/blockdev.sh@390 -- # qos_result=18956 00:12:50.943 20:55:18 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:50.943 20:55:18 -- bdev/blockdev.sh@394 -- # lower_limit=17100 00:12:50.943 20:55:18 -- bdev/blockdev.sh@395 -- # upper_limit=20900 00:12:50.943 20:55:18 -- bdev/blockdev.sh@398 -- # '[' 18956 -lt 17100 ']' 00:12:50.943 20:55:18 -- bdev/blockdev.sh@398 -- # '[' 18956 -gt 20900 ']' 00:12:50.943 00:12:50.943 real 0m5.214s 00:12:50.943 user 0m0.109s 00:12:50.943 sys 0m0.034s 00:12:50.943 20:55:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.943 20:55:18 -- common/autotest_common.sh@10 -- # set +x 00:12:50.943 20:55:18 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:50.943 20:55:18 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:50.943 20:55:18 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:50.943 20:55:18 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:50.943 20:55:18 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:50.943 20:55:18 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:50.943 20:55:18 -- bdev/blockdev.sh@376 -- # tail -1 00:12:56.211 20:55:23 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30362.64 121450.57 0.00 0.00 122880.00 0.00 0.00 ' 00:12:56.211 20:55:23 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:56.211 20:55:23 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:56.211 20:55:23 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:56.211 20:55:23 -- bdev/blockdev.sh@380 -- # iostat_result=122880.00 00:12:56.211 20:55:23 -- bdev/blockdev.sh@383 -- # echo 122880 00:12:56.211 20:55:23 -- bdev/blockdev.sh@425 -- # bw_limit=122880 00:12:56.211 20:55:23 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:12:56.211 20:55:23 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:12:56.211 20:55:23 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:12:56.211 20:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.211 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:12:56.211 20:55:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.211 20:55:23 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:12:56.211 20:55:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:56.211 20:55:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:56.211 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:12:56.211 ************************************ 00:12:56.211 START TEST bdev_qos_bw 00:12:56.211 ************************************ 00:12:56.211 20:55:23 -- common/autotest_common.sh@1104 -- # run_qos_test 12 BANDWIDTH Null_1 00:12:56.211 20:55:23 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:12:56.211 20:55:23 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:56.211 20:55:23 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:12:56.211 20:55:23 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:56.211 20:55:23 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:56.211 20:55:23 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:56.211 20:55:23 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:56.211 20:55:23 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:56.211 20:55:23 -- bdev/blockdev.sh@376 -- # tail -1 00:13:01.515 20:55:29 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3075.01 12300.05 0.00 0.00 12496.00 0.00 0.00 ' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@380 -- # iostat_result=12496.00 00:13:01.515 20:55:29 -- bdev/blockdev.sh@383 -- # echo 12496 00:13:01.515 ************************************ 00:13:01.515 END TEST bdev_qos_bw 00:13:01.515 ************************************ 00:13:01.515 20:55:29 -- bdev/blockdev.sh@390 -- # qos_result=12496 00:13:01.515 20:55:29 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:13:01.515 20:55:29 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:13:01.515 20:55:29 -- bdev/blockdev.sh@395 -- # 
upper_limit=13516 00:13:01.515 20:55:29 -- bdev/blockdev.sh@398 -- # '[' 12496 -lt 11059 ']' 00:13:01.515 20:55:29 -- bdev/blockdev.sh@398 -- # '[' 12496 -gt 13516 ']' 00:13:01.515 00:13:01.515 real 0m5.229s 00:13:01.515 user 0m0.110s 00:13:01.515 sys 0m0.031s 00:13:01.515 20:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.515 20:55:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 20:55:29 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:01.515 20:55:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.515 20:55:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 20:55:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.515 20:55:29 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:01.515 20:55:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:01.515 20:55:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.515 20:55:29 -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 ************************************ 00:13:01.515 START TEST bdev_qos_ro_bw 00:13:01.515 ************************************ 00:13:01.515 20:55:29 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:01.515 20:55:29 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:01.515 20:55:29 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:01.515 20:55:29 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:01.515 20:55:29 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:01.516 20:55:29 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:01.516 20:55:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:01.516 20:55:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:01.516 20:55:29 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:01.516 20:55:29 -- bdev/blockdev.sh@376 -- # tail -1 00:13:06.786 20:55:34 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.41 2045.66 0.00 0.00 2060.00 0.00 0.00 ' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:13:06.786 20:55:34 -- bdev/blockdev.sh@383 -- # echo 2060 00:13:06.786 ************************************ 00:13:06.786 END TEST bdev_qos_ro_bw 00:13:06.786 ************************************ 00:13:06.786 20:55:34 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:13:06.786 20:55:34 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:06.786 20:55:34 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:06.786 20:55:34 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:06.786 20:55:34 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:13:06.786 20:55:34 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:13:06.786 00:13:06.786 real 0m5.172s 00:13:06.786 user 0m0.101s 00:13:06.786 sys 0m0.046s 00:13:06.786 20:55:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.786 20:55:34 -- common/autotest_common.sh@10 -- # set +x 00:13:06.786 20:55:34 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:06.786 20:55:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.786 20:55:34 -- common/autotest_common.sh@10 -- # set +x 
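Note: all three QoS sub-tests above follow one recipe: measure an unthrottled baseline, derive a cap from it (the 76084 IOPS baseline quartered and floored to thousands gives the 19000 IOPS cap; the ~120 MiB/s baseline, apparently divided by ten, gives the 12 MB/s cap), apply it with bdev_set_qos_limit, re-measure, and accept the result only inside a ±10% band. A sketch of that acceptance arithmetic, reproducing the lower/upper values traced above:

    # run_qos_test acceptance band (integer bash arithmetic matches the trace):
    qos_limit=12288                        # 12 MB/s cap expressed in KiB/s
    lower_limit=$((qos_limit * 9 / 10))    # 11059, as logged
    upper_limit=$((qos_limit * 11 / 10))   # 13516, as logged
    qos_result=12496                       # measured KiB/s under the cap
    [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ] &&
        echo "within band"   # likewise 18956 in [17100, 20900] and 2060 in [1843, 2252]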
00:13:07.044 20:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.044 20:55:35 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:07.044 20:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.044 20:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.044 00:13:07.044 Latency(us) 00:13:07.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.044 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:07.044 Malloc_0 : 26.70 26434.44 103.26 0.00 0.00 9594.69 1921.40 503316.48 00:13:07.044 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:07.045 Null_1 : 26.88 28020.37 109.45 0.00 0.00 9117.40 647.91 176351.42 00:13:07.045 =================================================================================================================== 00:13:07.045 Total : 54454.81 212.71 0.00 0.00 9348.31 647.91 503316.48 00:13:07.045 0 00:13:07.045 20:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.045 20:55:35 -- bdev/blockdev.sh@459 -- # killprocess 110171 00:13:07.045 20:55:35 -- common/autotest_common.sh@926 -- # '[' -z 110171 ']' 00:13:07.045 20:55:35 -- common/autotest_common.sh@930 -- # kill -0 110171 00:13:07.045 20:55:35 -- common/autotest_common.sh@931 -- # uname 00:13:07.045 20:55:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:07.045 20:55:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110171 00:13:07.302 20:55:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:07.303 20:55:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:07.303 20:55:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110171' 00:13:07.303 killing process with pid 110171 00:13:07.303 20:55:35 -- common/autotest_common.sh@945 -- # kill 110171 00:13:07.303 Received shutdown signal, test time was about 26.916474 seconds 00:13:07.303 00:13:07.303 Latency(us) 00:13:07.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.303 =================================================================================================================== 00:13:07.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.303 20:55:35 -- common/autotest_common.sh@950 -- # wait 110171 00:13:08.267 20:55:36 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:08.267 00:13:08.267 real 0m29.357s 00:13:08.267 user 0m30.019s 00:13:08.267 sys 0m0.713s 00:13:08.267 20:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.267 20:55:36 -- common/autotest_common.sh@10 -- # set +x 00:13:08.267 ************************************ 00:13:08.267 END TEST bdev_qos 00:13:08.267 ************************************ 00:13:08.267 20:55:36 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:08.267 20:55:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:08.267 20:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.267 20:55:36 -- common/autotest_common.sh@10 -- # set +x 00:13:08.267 ************************************ 00:13:08.267 START TEST bdev_qd_sampling 00:13:08.267 ************************************ 00:13:08.267 20:55:36 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:08.267 20:55:36 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:08.267 20:55:36 -- bdev/blockdev.sh@539 -- # QD_PID=110649 00:13:08.267 20:55:36 -- bdev/blockdev.sh@538 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:08.267 20:55:36 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 110649' 00:13:08.267 Process bdev QD sampling period testing pid: 110649 00:13:08.267 20:55:36 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:08.267 20:55:36 -- bdev/blockdev.sh@542 -- # waitforlisten 110649 00:13:08.267 20:55:36 -- common/autotest_common.sh@819 -- # '[' -z 110649 ']' 00:13:08.267 20:55:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.267 20:55:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:08.267 20:55:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.268 20:55:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:08.268 20:55:36 -- common/autotest_common.sh@10 -- # set +x 00:13:08.526 [2024-06-09 20:55:36.469107] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:08.526 [2024-06-09 20:55:36.469480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110649 ] 00:13:08.526 [2024-06-09 20:55:36.643121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.784 [2024-06-09 20:55:36.875078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.784 [2024-06-09 20:55:36.875090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.350 20:55:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:09.350 20:55:37 -- common/autotest_common.sh@852 -- # return 0 00:13:09.350 20:55:37 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:09.350 20:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.350 20:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:09.609 Malloc_QD 00:13:09.609 20:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.609 20:55:37 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:09.609 20:55:37 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:09.609 20:55:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:09.609 20:55:37 -- common/autotest_common.sh@889 -- # local i 00:13:09.609 20:55:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:09.609 20:55:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:09.609 20:55:37 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:09.609 20:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.609 20:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:09.609 20:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.609 20:55:37 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:09.609 20:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.609 20:55:37 -- common/autotest_common.sh@10 -- # set +x 00:13:09.609 [ 00:13:09.609 { 00:13:09.609 "name": "Malloc_QD", 00:13:09.609 "aliases": [ 00:13:09.609 "57874684-dd1d-4fac-9756-0367facdfcdf" 00:13:09.609 ], 00:13:09.609 "product_name": "Malloc disk", 00:13:09.609 "block_size": 
512, 00:13:09.609 "num_blocks": 262144, 00:13:09.609 "uuid": "57874684-dd1d-4fac-9756-0367facdfcdf", 00:13:09.609 "assigned_rate_limits": { 00:13:09.609 "rw_ios_per_sec": 0, 00:13:09.609 "rw_mbytes_per_sec": 0, 00:13:09.609 "r_mbytes_per_sec": 0, 00:13:09.609 "w_mbytes_per_sec": 0 00:13:09.609 }, 00:13:09.609 "claimed": false, 00:13:09.609 "zoned": false, 00:13:09.609 "supported_io_types": { 00:13:09.609 "read": true, 00:13:09.609 "write": true, 00:13:09.609 "unmap": true, 00:13:09.609 "write_zeroes": true, 00:13:09.609 "flush": true, 00:13:09.609 "reset": true, 00:13:09.609 "compare": false, 00:13:09.609 "compare_and_write": false, 00:13:09.609 "abort": true, 00:13:09.609 "nvme_admin": false, 00:13:09.609 "nvme_io": false 00:13:09.609 }, 00:13:09.609 "memory_domains": [ 00:13:09.609 { 00:13:09.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.609 "dma_device_type": 2 00:13:09.609 } 00:13:09.609 ], 00:13:09.609 "driver_specific": {} 00:13:09.609 } 00:13:09.609 ] 00:13:09.609 20:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.609 20:55:37 -- common/autotest_common.sh@895 -- # return 0 00:13:09.609 20:55:37 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:09.609 20:55:37 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.609 Running I/O for 5 seconds... 00:13:11.513 20:55:39 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:11.513 20:55:39 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:11.513 20:55:39 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:11.513 20:55:39 -- bdev/blockdev.sh@519 -- # local iostats 00:13:11.513 20:55:39 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:11.513 20:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.513 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:11.513 20:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.513 20:55:39 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:11.513 20:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.513 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:11.513 20:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.513 20:55:39 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:11.513 "tick_rate": 2200000000, 00:13:11.513 "ticks": 1536633915260, 00:13:11.513 "bdevs": [ 00:13:11.513 { 00:13:11.513 "name": "Malloc_QD", 00:13:11.513 "bytes_read": 952144384, 00:13:11.513 "num_read_ops": 232451, 00:13:11.513 "bytes_written": 0, 00:13:11.513 "num_write_ops": 0, 00:13:11.513 "bytes_unmapped": 0, 00:13:11.513 "num_unmap_ops": 0, 00:13:11.513 "bytes_copied": 0, 00:13:11.513 "num_copy_ops": 0, 00:13:11.513 "read_latency_ticks": 2148951467512, 00:13:11.513 "max_read_latency_ticks": 11670474, 00:13:11.513 "min_read_latency_ticks": 336768, 00:13:11.513 "write_latency_ticks": 0, 00:13:11.513 "max_write_latency_ticks": 0, 00:13:11.513 "min_write_latency_ticks": 0, 00:13:11.513 "unmap_latency_ticks": 0, 00:13:11.513 "max_unmap_latency_ticks": 0, 00:13:11.513 "min_unmap_latency_ticks": 0, 00:13:11.513 "copy_latency_ticks": 0, 00:13:11.513 "max_copy_latency_ticks": 0, 00:13:11.513 "min_copy_latency_ticks": 0, 00:13:11.513 "io_error": {}, 00:13:11.513 "queue_depth_polling_period": 10, 00:13:11.513 "queue_depth": 512, 00:13:11.513 "io_time": 30, 00:13:11.513 "weighted_io_time": 15360 00:13:11.513 } 00:13:11.513 ] 00:13:11.513 }' 00:13:11.513 20:55:39 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:13:11.513 20:55:39 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:11.513 20:55:39 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:11.513 20:55:39 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:11.513 20:55:39 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:11.513 20:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.513 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:11.513 00:13:11.513 Latency(us) 00:13:11.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.513 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:11.513 Malloc_QD : 1.99 60451.50 236.14 0.00 0.00 4224.85 1094.75 5451.40 00:13:11.513 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:11.513 Malloc_QD : 1.99 60946.49 238.07 0.00 0.00 4190.26 651.64 5332.25 00:13:11.513 =================================================================================================================== 00:13:11.513 Total : 121397.99 474.21 0.00 0.00 4207.48 651.64 5451.40 00:13:11.771 0 00:13:11.771 20:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.771 20:55:39 -- bdev/blockdev.sh@552 -- # killprocess 110649 00:13:11.771 20:55:39 -- common/autotest_common.sh@926 -- # '[' -z 110649 ']' 00:13:11.771 20:55:39 -- common/autotest_common.sh@930 -- # kill -0 110649 00:13:11.771 20:55:39 -- common/autotest_common.sh@931 -- # uname 00:13:11.771 20:55:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:11.771 20:55:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110649 00:13:11.771 20:55:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:11.771 20:55:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:11.771 20:55:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110649' 00:13:11.771 killing process with pid 110649 00:13:11.771 20:55:39 -- common/autotest_common.sh@945 -- # kill 110649 00:13:11.771 Received shutdown signal, test time was about 2.110805 seconds 00:13:11.771 00:13:11.771 Latency(us) 00:13:11.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.771 =================================================================================================================== 00:13:11.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.771 20:55:39 -- common/autotest_common.sh@950 -- # wait 110649 00:13:13.148 20:55:40 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:13.148 00:13:13.148 real 0m4.508s 00:13:13.148 user 0m8.387s 00:13:13.148 sys 0m0.395s 00:13:13.148 20:55:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.148 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:13.148 ************************************ 00:13:13.148 END TEST bdev_qd_sampling 00:13:13.148 ************************************ 00:13:13.148 20:55:40 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:13.148 20:55:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:13.148 20:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:13.148 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:13.148 ************************************ 00:13:13.148 START TEST bdev_error 00:13:13.148 ************************************ 00:13:13.148 20:55:40 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:13.148 20:55:40 -- 
bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:13.148 20:55:40 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:13.148 20:55:40 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:13.148 20:55:40 -- bdev/blockdev.sh@470 -- # ERR_PID=110751 00:13:13.148 Process error testing pid: 110751 00:13:13.148 20:55:40 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 110751' 00:13:13.148 20:55:40 -- bdev/blockdev.sh@472 -- # waitforlisten 110751 00:13:13.148 20:55:40 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:13.148 20:55:40 -- common/autotest_common.sh@819 -- # '[' -z 110751 ']' 00:13:13.148 20:55:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.148 20:55:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.148 20:55:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.148 20:55:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.148 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:13.148 [2024-06-09 20:55:41.038817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:13.148 [2024-06-09 20:55:41.039009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110751 ] 00:13:13.148 [2024-06-09 20:55:41.208444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.407 [2024-06-09 20:55:41.372072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.975 20:55:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:13.975 20:55:41 -- common/autotest_common.sh@852 -- # return 0 00:13:13.975 20:55:41 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:13.975 20:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.975 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 Dev_1 00:13:13.975 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.975 20:55:42 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:13.975 20:55:42 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:13.975 20:55:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:13.975 20:55:42 -- common/autotest_common.sh@889 -- # local i 00:13:13.975 20:55:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:13.975 20:55:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:13.975 20:55:42 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:13.975 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.975 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.975 20:55:42 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:13.975 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.975 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 [ 00:13:13.975 { 00:13:13.975 "name": "Dev_1", 00:13:13.975 "aliases": [ 00:13:13.975 "390952cc-15ee-40c2-870e-fa49eba894e6" 00:13:13.975 ], 00:13:13.975 "product_name": "Malloc disk", 00:13:13.975 "block_size": 
512, 00:13:13.975 "num_blocks": 262144, 00:13:13.975 "uuid": "390952cc-15ee-40c2-870e-fa49eba894e6", 00:13:13.975 "assigned_rate_limits": { 00:13:13.975 "rw_ios_per_sec": 0, 00:13:13.975 "rw_mbytes_per_sec": 0, 00:13:13.975 "r_mbytes_per_sec": 0, 00:13:13.975 "w_mbytes_per_sec": 0 00:13:13.975 }, 00:13:13.975 "claimed": false, 00:13:13.975 "zoned": false, 00:13:13.975 "supported_io_types": { 00:13:13.975 "read": true, 00:13:13.975 "write": true, 00:13:13.975 "unmap": true, 00:13:13.975 "write_zeroes": true, 00:13:13.975 "flush": true, 00:13:13.975 "reset": true, 00:13:13.975 "compare": false, 00:13:13.975 "compare_and_write": false, 00:13:13.975 "abort": true, 00:13:13.975 "nvme_admin": false, 00:13:13.975 "nvme_io": false 00:13:13.975 }, 00:13:13.975 "memory_domains": [ 00:13:13.975 { 00:13:13.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.975 "dma_device_type": 2 00:13:13.975 } 00:13:13.975 ], 00:13:13.975 "driver_specific": {} 00:13:13.975 } 00:13:13.975 ] 00:13:13.975 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.975 20:55:42 -- common/autotest_common.sh@895 -- # return 0 00:13:13.975 20:55:42 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:13.975 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.975 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:13.975 true 00:13:13.975 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.975 20:55:42 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:13.975 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.975 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.233 Dev_2 00:13:14.233 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.233 20:55:42 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:14.233 20:55:42 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:14.233 20:55:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:14.233 20:55:42 -- common/autotest_common.sh@889 -- # local i 00:13:14.233 20:55:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:14.233 20:55:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:14.233 20:55:42 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:14.233 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.233 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.233 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.233 20:55:42 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:14.233 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.233 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.233 [ 00:13:14.233 { 00:13:14.233 "name": "Dev_2", 00:13:14.233 "aliases": [ 00:13:14.233 "005ac34a-1cf1-407e-b71c-28ab201fe4b7" 00:13:14.233 ], 00:13:14.233 "product_name": "Malloc disk", 00:13:14.233 "block_size": 512, 00:13:14.233 "num_blocks": 262144, 00:13:14.233 "uuid": "005ac34a-1cf1-407e-b71c-28ab201fe4b7", 00:13:14.233 "assigned_rate_limits": { 00:13:14.233 "rw_ios_per_sec": 0, 00:13:14.233 "rw_mbytes_per_sec": 0, 00:13:14.233 "r_mbytes_per_sec": 0, 00:13:14.233 "w_mbytes_per_sec": 0 00:13:14.233 }, 00:13:14.233 "claimed": false, 00:13:14.233 "zoned": false, 00:13:14.233 "supported_io_types": { 00:13:14.233 "read": true, 00:13:14.233 "write": true, 00:13:14.233 "unmap": true, 00:13:14.233 "write_zeroes": true, 00:13:14.233 "flush": true, 00:13:14.233 "reset": true, 
00:13:14.234 "compare": false, 00:13:14.234 "compare_and_write": false, 00:13:14.234 "abort": true, 00:13:14.234 "nvme_admin": false, 00:13:14.234 "nvme_io": false 00:13:14.234 }, 00:13:14.234 "memory_domains": [ 00:13:14.234 { 00:13:14.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.234 "dma_device_type": 2 00:13:14.234 } 00:13:14.234 ], 00:13:14.234 "driver_specific": {} 00:13:14.234 } 00:13:14.234 ] 00:13:14.234 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.234 20:55:42 -- common/autotest_common.sh@895 -- # return 0 00:13:14.234 20:55:42 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:14.234 20:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.234 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.234 20:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.234 20:55:42 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:14.234 20:55:42 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:14.234 Running I/O for 5 seconds... 00:13:15.169 20:55:43 -- bdev/blockdev.sh@485 -- # kill -0 110751 00:13:15.169 20:55:43 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 110751' 00:13:15.169 Process is existed as continue on error is set. Pid: 110751 00:13:15.169 20:55:43 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:15.169 20:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.169 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:15.169 20:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.169 20:55:43 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:15.169 20:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.169 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:15.427 Timeout while waiting for response: 00:13:15.427 00:13:15.427 00:13:15.685 20:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.685 20:55:43 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:19.871 00:13:19.871 Latency(us) 00:13:19.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.871 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:19.871 EE_Dev_1 : 0.90 38624.99 150.88 5.55 0.00 411.20 141.50 1057.51 00:13:19.871 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:19.871 Dev_2 : 5.00 76290.82 298.01 0.00 0.00 206.76 65.16 394645.88 00:13:19.871 =================================================================================================================== 00:13:19.871 Total : 114915.81 448.89 5.55 0.00 223.86 65.16 394645.88 00:13:20.807 20:55:48 -- bdev/blockdev.sh@497 -- # killprocess 110751 00:13:20.807 20:55:48 -- common/autotest_common.sh@926 -- # '[' -z 110751 ']' 00:13:20.807 20:55:48 -- common/autotest_common.sh@930 -- # kill -0 110751 00:13:20.807 20:55:48 -- common/autotest_common.sh@931 -- # uname 00:13:20.807 20:55:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:20.807 20:55:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110751 00:13:20.807 20:55:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:20.807 20:55:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:20.807 20:55:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110751' 00:13:20.807 killing process with pid 110751 00:13:20.807 Received 
shutdown signal, test time was about 5.000000 seconds 00:13:20.807 00:13:20.807 Latency(us) 00:13:20.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.807 =================================================================================================================== 00:13:20.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.807 20:55:48 -- common/autotest_common.sh@945 -- # kill 110751 00:13:20.807 20:55:48 -- common/autotest_common.sh@950 -- # wait 110751 00:13:22.182 20:55:50 -- bdev/blockdev.sh@501 -- # ERR_PID=110872 00:13:22.182 20:55:50 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:22.182 Process error testing pid: 110872 00:13:22.182 20:55:50 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 110872' 00:13:22.182 20:55:50 -- bdev/blockdev.sh@503 -- # waitforlisten 110872 00:13:22.182 20:55:50 -- common/autotest_common.sh@819 -- # '[' -z 110872 ']' 00:13:22.182 20:55:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.182 20:55:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:22.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.182 20:55:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.182 20:55:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:22.182 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:13:22.182 [2024-06-09 20:55:50.160643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:22.182 [2024-06-09 20:55:50.161304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110872 ] 00:13:22.182 [2024-06-09 20:55:50.330508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.441 [2024-06-09 20:55:50.537683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.009 20:55:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.009 20:55:51 -- common/autotest_common.sh@852 -- # return 0 00:13:23.009 20:55:51 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:23.009 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.009 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.268 Dev_1 00:13:23.268 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.268 20:55:51 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:23.268 20:55:51 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:23.268 20:55:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:23.268 20:55:51 -- common/autotest_common.sh@889 -- # local i 00:13:23.268 20:55:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:23.268 20:55:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:23.268 20:55:51 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:23.268 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.268 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.268 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.268 20:55:51 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:23.268 20:55:51 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.268 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.268 [ 00:13:23.268 { 00:13:23.268 "name": "Dev_1", 00:13:23.268 "aliases": [ 00:13:23.268 "1bdf2412-25da-4eb4-a834-e86400d9677c" 00:13:23.268 ], 00:13:23.268 "product_name": "Malloc disk", 00:13:23.268 "block_size": 512, 00:13:23.268 "num_blocks": 262144, 00:13:23.268 "uuid": "1bdf2412-25da-4eb4-a834-e86400d9677c", 00:13:23.268 "assigned_rate_limits": { 00:13:23.268 "rw_ios_per_sec": 0, 00:13:23.268 "rw_mbytes_per_sec": 0, 00:13:23.268 "r_mbytes_per_sec": 0, 00:13:23.268 "w_mbytes_per_sec": 0 00:13:23.268 }, 00:13:23.268 "claimed": false, 00:13:23.268 "zoned": false, 00:13:23.268 "supported_io_types": { 00:13:23.268 "read": true, 00:13:23.268 "write": true, 00:13:23.268 "unmap": true, 00:13:23.268 "write_zeroes": true, 00:13:23.268 "flush": true, 00:13:23.268 "reset": true, 00:13:23.268 "compare": false, 00:13:23.268 "compare_and_write": false, 00:13:23.268 "abort": true, 00:13:23.268 "nvme_admin": false, 00:13:23.268 "nvme_io": false 00:13:23.268 }, 00:13:23.268 "memory_domains": [ 00:13:23.268 { 00:13:23.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.268 "dma_device_type": 2 00:13:23.268 } 00:13:23.268 ], 00:13:23.268 "driver_specific": {} 00:13:23.268 } 00:13:23.268 ] 00:13:23.268 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.268 20:55:51 -- common/autotest_common.sh@895 -- # return 0 00:13:23.269 20:55:51 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:23.269 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.269 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.269 true 00:13:23.269 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.269 20:55:51 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:23.269 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.269 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.269 Dev_2 00:13:23.269 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.269 20:55:51 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:23.269 20:55:51 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:23.269 20:55:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:23.269 20:55:51 -- common/autotest_common.sh@889 -- # local i 00:13:23.269 20:55:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:23.269 20:55:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:23.269 20:55:51 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:23.269 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.269 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.269 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.269 20:55:51 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:23.269 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.269 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.269 [ 00:13:23.269 { 00:13:23.269 "name": "Dev_2", 00:13:23.269 "aliases": [ 00:13:23.269 "0802cee6-f1be-4aba-b68e-3cdc23f97c38" 00:13:23.269 ], 00:13:23.269 "product_name": "Malloc disk", 00:13:23.269 "block_size": 512, 00:13:23.269 "num_blocks": 262144, 00:13:23.269 "uuid": "0802cee6-f1be-4aba-b68e-3cdc23f97c38", 00:13:23.269 "assigned_rate_limits": { 00:13:23.269 "rw_ios_per_sec": 0, 00:13:23.269 "rw_mbytes_per_sec": 0, 00:13:23.269 
"r_mbytes_per_sec": 0, 00:13:23.269 "w_mbytes_per_sec": 0 00:13:23.269 }, 00:13:23.269 "claimed": false, 00:13:23.269 "zoned": false, 00:13:23.269 "supported_io_types": { 00:13:23.269 "read": true, 00:13:23.269 "write": true, 00:13:23.269 "unmap": true, 00:13:23.269 "write_zeroes": true, 00:13:23.269 "flush": true, 00:13:23.269 "reset": true, 00:13:23.269 "compare": false, 00:13:23.269 "compare_and_write": false, 00:13:23.269 "abort": true, 00:13:23.269 "nvme_admin": false, 00:13:23.269 "nvme_io": false 00:13:23.269 }, 00:13:23.269 "memory_domains": [ 00:13:23.269 { 00:13:23.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.269 "dma_device_type": 2 00:13:23.269 } 00:13:23.269 ], 00:13:23.269 "driver_specific": {} 00:13:23.269 } 00:13:23.269 ] 00:13:23.269 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.269 20:55:51 -- common/autotest_common.sh@895 -- # return 0 00:13:23.269 20:55:51 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:23.269 20:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.269 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:13:23.269 20:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.269 20:55:51 -- bdev/blockdev.sh@513 -- # NOT wait 110872 00:13:23.269 20:55:51 -- common/autotest_common.sh@640 -- # local es=0 00:13:23.269 20:55:51 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:23.269 20:55:51 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 110872 00:13:23.269 20:55:51 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:23.269 20:55:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:23.269 20:55:51 -- common/autotest_common.sh@632 -- # type -t wait 00:13:23.269 20:55:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:23.269 20:55:51 -- common/autotest_common.sh@643 -- # wait 110872 00:13:23.528 Running I/O for 5 seconds... 
00:13:23.528 task offset: 56208 on job bdev=EE_Dev_1 fails 00:13:23.528 00:13:23.528 Latency(us) 00:13:23.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.528 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:23.528 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:23.528 EE_Dev_1 : 0.00 24417.31 95.38 5549.39 0.00 440.37 137.77 789.41 00:13:23.528 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:23.528 Dev_2 : 0.00 17758.05 69.37 0.00 0.00 638.09 137.77 1184.12 00:13:23.528 =================================================================================================================== 00:13:23.528 Total : 42175.36 164.75 5549.39 0.00 547.61 137.77 1184.12 00:13:23.528 [2024-06-09 20:55:51.532086] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:23.528 request: 00:13:23.528 { 00:13:23.528 "method": "perform_tests", 00:13:23.528 "req_id": 1 00:13:23.528 } 00:13:23.528 Got JSON-RPC error response 00:13:23.528 response: 00:13:23.528 { 00:13:23.528 "code": -32603, 00:13:23.528 "message": "bdevperf failed with error Operation not permitted" 00:13:23.528 } 00:13:25.433 ************************************ 00:13:25.433 END TEST bdev_error 00:13:25.433 ************************************ 00:13:25.433 20:55:53 -- common/autotest_common.sh@643 -- # es=255 00:13:25.433 20:55:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:25.433 20:55:53 -- common/autotest_common.sh@652 -- # es=127 00:13:25.433 20:55:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:25.433 20:55:53 -- common/autotest_common.sh@660 -- # es=1 00:13:25.433 20:55:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:25.433 00:13:25.433 real 0m12.137s 00:13:25.433 user 0m12.259s 00:13:25.433 sys 0m0.877s 00:13:25.433 20:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.433 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:25.433 20:55:53 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:25.433 20:55:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:25.433 20:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.433 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:25.433 ************************************ 00:13:25.433 START TEST bdev_stat 00:13:25.433 ************************************ 00:13:25.433 20:55:53 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:25.433 20:55:53 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:25.433 20:55:53 -- bdev/blockdev.sh@594 -- # STAT_PID=110930 00:13:25.433 20:55:53 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 110930' 00:13:25.433 Process Bdev IO statistics testing pid: 110930 00:13:25.433 20:55:53 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:25.433 20:55:53 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:25.433 20:55:53 -- bdev/blockdev.sh@597 -- # waitforlisten 110930 00:13:25.433 20:55:53 -- common/autotest_common.sh@819 -- # '[' -z 110930 ']' 00:13:25.433 20:55:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.433 20:55:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
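Note: the expected failure above is scored by exit status: bdevperf dies on the JSON-RPC error "bdevperf failed with error Operation not permitted", wait reaps es=255, and the harness folds that down (codes above 128 clamp to 127, then to 1) before (( !es == 0 )) declares the negative test passed. The bdev_stat suite now starting runs on core mask 0x3, so Malloc_STAT is driven from two channels, and its central check is a consistency bound: read counts sampled per channel between two whole-device snapshots must sum to a value between those snapshots. A sketch using the figures that appear in the entries below (the between-bounds form is inferred from those values, so the exact comparison operators are an assumption):

    # Per-channel consistency check for the stat suite (values from the trace below):
    io_count1=233731                               # whole-device snapshot #1
    io_count_per_channel_all=$((120064 + 121344))  # 241408, threads 2 + 3
    io_count2=254467                               # whole-device snapshot #2
    [ "$io_count_per_channel_all" -ge "$io_count1" ] &&
        [ "$io_count_per_channel_all" -le "$io_count2" ] && echo "stats consistent"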
00:13:25.433 20:55:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.433 20:55:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.433 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:13:25.433 [2024-06-09 20:55:53.235354] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:25.433 [2024-06-09 20:55:53.235597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110930 ] 00:13:25.433 [2024-06-09 20:55:53.409639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.693 [2024-06-09 20:55:53.617605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.693 [2024-06-09 20:55:53.617616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.951 20:55:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:25.951 20:55:54 -- common/autotest_common.sh@852 -- # return 0 00:13:25.951 20:55:54 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:25.951 20:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.951 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 Malloc_STAT 00:13:26.210 20:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.210 20:55:54 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:26.210 20:55:54 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:26.210 20:55:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:26.210 20:55:54 -- common/autotest_common.sh@889 -- # local i 00:13:26.210 20:55:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:26.210 20:55:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:26.210 20:55:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:26.210 20:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.210 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 20:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.210 20:55:54 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:26.210 20:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.210 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:26.210 [ 00:13:26.210 { 00:13:26.210 "name": "Malloc_STAT", 00:13:26.210 "aliases": [ 00:13:26.210 "14b86d08-f440-4013-84da-51d6bd8ee082" 00:13:26.210 ], 00:13:26.210 "product_name": "Malloc disk", 00:13:26.210 "block_size": 512, 00:13:26.210 "num_blocks": 262144, 00:13:26.210 "uuid": "14b86d08-f440-4013-84da-51d6bd8ee082", 00:13:26.210 "assigned_rate_limits": { 00:13:26.210 "rw_ios_per_sec": 0, 00:13:26.210 "rw_mbytes_per_sec": 0, 00:13:26.210 "r_mbytes_per_sec": 0, 00:13:26.210 "w_mbytes_per_sec": 0 00:13:26.210 }, 00:13:26.210 "claimed": false, 00:13:26.210 "zoned": false, 00:13:26.210 "supported_io_types": { 00:13:26.211 "read": true, 00:13:26.211 "write": true, 00:13:26.211 "unmap": true, 00:13:26.211 "write_zeroes": true, 00:13:26.211 "flush": true, 00:13:26.211 "reset": true, 00:13:26.211 "compare": false, 00:13:26.211 "compare_and_write": false, 00:13:26.211 "abort": true, 00:13:26.211 "nvme_admin": false, 00:13:26.211 "nvme_io": false 00:13:26.211 }, 00:13:26.211 "memory_domains": [ 00:13:26.211 { 
00:13:26.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.211 "dma_device_type": 2 00:13:26.211 } 00:13:26.211 ], 00:13:26.211 "driver_specific": {} 00:13:26.211 } 00:13:26.211 ] 00:13:26.211 20:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.211 20:55:54 -- common/autotest_common.sh@895 -- # return 0 00:13:26.211 20:55:54 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:26.211 20:55:54 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:26.211 Running I/O for 10 seconds... 00:13:28.119 20:55:56 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:28.119 20:55:56 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:28.119 20:55:56 -- bdev/blockdev.sh@558 -- # local iostats 00:13:28.119 20:55:56 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:28.119 20:55:56 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:28.119 20:55:56 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:28.119 20:55:56 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:28.119 20:55:56 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:28.119 20:55:56 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:28.119 20:55:56 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:28.119 20:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.119 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.119 20:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.119 20:55:56 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:28.119 "tick_rate": 2200000000, 00:13:28.119 "ticks": 1573319112810, 00:13:28.119 "bdevs": [ 00:13:28.119 { 00:13:28.119 "name": "Malloc_STAT", 00:13:28.119 "bytes_read": 957387264, 00:13:28.119 "num_read_ops": 233731, 00:13:28.119 "bytes_written": 0, 00:13:28.119 "num_write_ops": 0, 00:13:28.119 "bytes_unmapped": 0, 00:13:28.119 "num_unmap_ops": 0, 00:13:28.119 "bytes_copied": 0, 00:13:28.119 "num_copy_ops": 0, 00:13:28.119 "read_latency_ticks": 2158293027284, 00:13:28.119 "max_read_latency_ticks": 13526360, 00:13:28.119 "min_read_latency_ticks": 335244, 00:13:28.119 "write_latency_ticks": 0, 00:13:28.119 "max_write_latency_ticks": 0, 00:13:28.119 "min_write_latency_ticks": 0, 00:13:28.119 "unmap_latency_ticks": 0, 00:13:28.119 "max_unmap_latency_ticks": 0, 00:13:28.119 "min_unmap_latency_ticks": 0, 00:13:28.119 "copy_latency_ticks": 0, 00:13:28.119 "max_copy_latency_ticks": 0, 00:13:28.119 "min_copy_latency_ticks": 0, 00:13:28.119 "io_error": {} 00:13:28.119 } 00:13:28.119 ] 00:13:28.119 }' 00:13:28.119 20:55:56 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@567 -- # io_count1=233731 00:13:28.378 20:55:56 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:28.378 20:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.378 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.378 20:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.378 20:55:56 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:28.378 "tick_rate": 2200000000, 00:13:28.378 "ticks": 1573471089076, 00:13:28.378 "name": "Malloc_STAT", 00:13:28.378 "channels": [ 00:13:28.378 { 00:13:28.378 "thread_id": 2, 00:13:28.378 "bytes_read": 491782144, 00:13:28.378 "num_read_ops": 120064, 00:13:28.378 "bytes_written": 0, 00:13:28.378 "num_write_ops": 0, 00:13:28.378 "bytes_unmapped": 0, 00:13:28.378 "num_unmap_ops": 0, 00:13:28.378 
"bytes_copied": 0, 00:13:28.378 "num_copy_ops": 0, 00:13:28.378 "read_latency_ticks": 1117730089335, 00:13:28.378 "max_read_latency_ticks": 13883706, 00:13:28.378 "min_read_latency_ticks": 7584942, 00:13:28.378 "write_latency_ticks": 0, 00:13:28.378 "max_write_latency_ticks": 0, 00:13:28.378 "min_write_latency_ticks": 0, 00:13:28.378 "unmap_latency_ticks": 0, 00:13:28.378 "max_unmap_latency_ticks": 0, 00:13:28.378 "min_unmap_latency_ticks": 0, 00:13:28.378 "copy_latency_ticks": 0, 00:13:28.378 "max_copy_latency_ticks": 0, 00:13:28.378 "min_copy_latency_ticks": 0 00:13:28.378 }, 00:13:28.378 { 00:13:28.378 "thread_id": 3, 00:13:28.378 "bytes_read": 497025024, 00:13:28.378 "num_read_ops": 121344, 00:13:28.378 "bytes_written": 0, 00:13:28.378 "num_write_ops": 0, 00:13:28.378 "bytes_unmapped": 0, 00:13:28.378 "num_unmap_ops": 0, 00:13:28.378 "bytes_copied": 0, 00:13:28.378 "num_copy_ops": 0, 00:13:28.378 "read_latency_ticks": 1120224537565, 00:13:28.378 "max_read_latency_ticks": 11446708, 00:13:28.378 "min_read_latency_ticks": 7586270, 00:13:28.378 "write_latency_ticks": 0, 00:13:28.378 "max_write_latency_ticks": 0, 00:13:28.378 "min_write_latency_ticks": 0, 00:13:28.378 "unmap_latency_ticks": 0, 00:13:28.378 "max_unmap_latency_ticks": 0, 00:13:28.378 "min_unmap_latency_ticks": 0, 00:13:28.378 "copy_latency_ticks": 0, 00:13:28.378 "max_copy_latency_ticks": 0, 00:13:28.378 "min_copy_latency_ticks": 0 00:13:28.378 } 00:13:28.378 ] 00:13:28.378 }' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=120064 00:13:28.378 20:55:56 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=120064 00:13:28.378 20:55:56 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=121344 00:13:28.378 20:55:56 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=241408 00:13:28.378 20:55:56 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:28.378 20:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.378 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.378 20:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.378 20:55:56 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:28.378 "tick_rate": 2200000000, 00:13:28.378 "ticks": 1573738037981, 00:13:28.378 "bdevs": [ 00:13:28.378 { 00:13:28.378 "name": "Malloc_STAT", 00:13:28.378 "bytes_read": 1042321920, 00:13:28.378 "num_read_ops": 254467, 00:13:28.378 "bytes_written": 0, 00:13:28.378 "num_write_ops": 0, 00:13:28.378 "bytes_unmapped": 0, 00:13:28.378 "num_unmap_ops": 0, 00:13:28.378 "bytes_copied": 0, 00:13:28.378 "num_copy_ops": 0, 00:13:28.378 "read_latency_ticks": 2372741992732, 00:13:28.378 "max_read_latency_ticks": 14428871, 00:13:28.378 "min_read_latency_ticks": 335244, 00:13:28.378 "write_latency_ticks": 0, 00:13:28.378 "max_write_latency_ticks": 0, 00:13:28.378 "min_write_latency_ticks": 0, 00:13:28.378 "unmap_latency_ticks": 0, 00:13:28.378 "max_unmap_latency_ticks": 0, 00:13:28.378 "min_unmap_latency_ticks": 0, 00:13:28.378 "copy_latency_ticks": 0, 00:13:28.378 "max_copy_latency_ticks": 0, 00:13:28.378 "min_copy_latency_ticks": 0, 00:13:28.378 "io_error": {} 00:13:28.378 } 00:13:28.378 ] 00:13:28.378 }' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@576 -- # io_count2=254467 00:13:28.378 20:55:56 -- bdev/blockdev.sh@581 -- # '[' 241408 
-lt 233731 ']' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@581 -- # '[' 241408 -gt 254467 ']' 00:13:28.378 20:55:56 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:28.378 20:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.378 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.378 00:13:28.378 Latency(us) 00:13:28.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.378 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:28.378 Malloc_STAT : 2.18 59676.42 233.11 0.00 0.00 4279.79 1444.77 6821.70 00:13:28.378 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:28.378 Malloc_STAT : 2.18 60490.88 236.29 0.00 0.00 4222.11 1347.96 5213.09 00:13:28.378 =================================================================================================================== 00:13:28.378 Total : 120167.30 469.40 0.00 0.00 4250.75 1347.96 6821.70 00:13:28.637 0 00:13:28.637 20:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.637 20:55:56 -- bdev/blockdev.sh@607 -- # killprocess 110930 00:13:28.637 20:55:56 -- common/autotest_common.sh@926 -- # '[' -z 110930 ']' 00:13:28.637 20:55:56 -- common/autotest_common.sh@930 -- # kill -0 110930 00:13:28.637 20:55:56 -- common/autotest_common.sh@931 -- # uname 00:13:28.637 20:55:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:28.637 20:55:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110930 00:13:28.637 20:55:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:28.637 killing process with pid 110930 00:13:28.637 20:55:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:28.637 20:55:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110930' 00:13:28.637 20:55:56 -- common/autotest_common.sh@945 -- # kill 110930 00:13:28.637 Received shutdown signal, test time was about 2.322921 seconds 00:13:28.637 00:13:28.637 Latency(us) 00:13:28.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.637 =================================================================================================================== 00:13:28.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:28.637 20:55:56 -- common/autotest_common.sh@950 -- # wait 110930 00:13:30.015 20:55:57 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:30.015 00:13:30.015 real 0m4.740s 00:13:30.015 user 0m8.942s 00:13:30.015 sys 0m0.371s 00:13:30.015 20:55:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.015 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.015 ************************************ 00:13:30.015 END TEST bdev_stat 00:13:30.015 ************************************ 00:13:30.015 20:55:57 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:30.015 20:55:57 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:30.015 20:55:57 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:30.015 20:55:57 -- bdev/blockdev.sh@809 -- # cleanup 00:13:30.015 20:55:57 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:30.015 20:55:57 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:30.015 20:55:57 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:30.015 20:55:57 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:30.015 20:55:57 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:30.015 20:55:57 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:30.015 00:13:30.015 real 2m20.511s 00:13:30.015 user 5m47.817s 00:13:30.015 sys 0m21.350s 00:13:30.015 20:55:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.015 ************************************ 00:13:30.015 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.015 END TEST blockdev_general 00:13:30.015 ************************************ 00:13:30.015 20:55:57 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:30.015 20:55:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:30.015 20:55:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.015 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.015 ************************************ 00:13:30.015 START TEST bdev_raid 00:13:30.015 ************************************ 00:13:30.015 20:55:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:30.015 * Looking for test storage... 00:13:30.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:30.015 20:55:58 -- bdev/nbd_common.sh@6 -- # set -e 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:30.015 20:55:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:30.015 20:55:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.015 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:13:30.015 ************************************ 00:13:30.015 START TEST raid_function_test_raid0 00:13:30.015 ************************************ 00:13:30.015 20:55:58 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@86 -- # raid_pid=111086 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111086' 00:13:30.015 Process raid pid: 111086 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@88 -- # waitforlisten 111086 /var/tmp/spdk-raid.sock 00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:30.015 20:55:58 -- common/autotest_common.sh@819 -- # '[' -z 111086 ']' 00:13:30.015 20:55:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:30.015 20:55:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:30.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
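The bdev_stat suite that just closed (blockdev.sh@566 through @581) samples whole-device iostats twice around one per-channel sample and requires the summed channel reads to land between the two totals. A condensed sketch of that check, assuming $rpc stands for scripts/rpc.py pointed at the bdevperf socket, which is what rpc_cmd resolves to in this run:

io_count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
per_channel=$($rpc bdev_get_iostat -b Malloc_STAT -c)
ch1=$(jq -r '.channels[0].num_read_ops' <<< "$per_channel")
ch2=$(jq -r '.channels[1].num_read_ops' <<< "$per_channel")
io_count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# reads keep flowing between samples, so: io_count1 <= ch1+ch2 <= io_count2
# (233731 <= 241408 <= 254467 in the run above)
(( ch1 + ch2 >= io_count1 && ch1 + ch2 <= io_count2 )) || exit 1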
00:13:30.015 20:55:57 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:30.015 20:55:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:13:30.015 20:55:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:30.015 20:55:58 -- common/autotest_common.sh@10 -- # set +x
00:13:30.015 ************************************
00:13:30.015 START TEST bdev_raid
00:13:30.015 ************************************
00:13:30.015 20:55:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:13:30.015 * Looking for test storage...
00:13:30.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:30.015 20:55:58 -- bdev/nbd_common.sh@6 -- # set -e
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # uname -s
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']'
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@717 -- # has_nbd=true
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@718 -- # modprobe nbd
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:13:30.015 20:55:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:30.015 20:55:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:30.015 20:55:58 -- common/autotest_common.sh@10 -- # set +x
00:13:30.015 ************************************
00:13:30.015 START TEST raid_function_test_raid0
00:13:30.015 ************************************
00:13:30.015 20:55:58 -- common/autotest_common.sh@1104 -- # raid_function_test raid0
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@86 -- # raid_pid=111086
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111086'
Process raid pid: 111086
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@88 -- # waitforlisten 111086 /var/tmp/spdk-raid.sock
00:13:30.015 20:55:58 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:30.015 20:55:58 -- common/autotest_common.sh@819 -- # '[' -z 111086 ']'
00:13:30.015 20:55:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:30.015 20:55:58 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:30.015 20:55:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:30.015 20:55:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:30.015 20:55:58 -- common/autotest_common.sh@10 -- # set +x
00:13:30.015 [2024-06-09 20:55:58.178884] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:13:30.015 [2024-06-09 20:55:58.179275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:30.274 [2024-06-09 20:55:58.343731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:30.532 [2024-06-09 20:55:58.541602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:30.791 [2024-06-09 20:55:58.726059] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:31.050 20:55:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:31.050 20:55:59 -- common/autotest_common.sh@852 -- # return 0
00:13:31.050 20:55:59 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0
00:13:31.050 20:55:59 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0
00:13:31.050 20:55:59 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:31.050 20:55:59 -- bdev/bdev_raid.sh@70 -- # cat
00:13:31.050 20:55:59 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:31.309 [2024-06-09 20:55:59.411600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:31.309 [2024-06-09 20:55:59.413616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:31.310 [2024-06-09 20:55:59.413718] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:31.310 [2024-06-09 20:55:59.413732] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:31.310 [2024-06-09 20:55:59.413862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:13:31.310 [2024-06-09 20:55:59.414231] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:31.310 [2024-06-09 20:55:59.414257] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80
00:13:31.310 [2024-06-09 20:55:59.414430] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:31.310 20:55:59 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:31.310 20:55:59 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:31.310 20:55:59 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:31.569 20:55:59 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:31.569 20:55:59 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:31.569 20:55:59 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@12 -- # local i
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:31.569 20:55:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:31.828 [2024-06-09 20:55:59.895707] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:13:31.828 /dev/nbd0
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:31.828 20:55:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:13:31.828 20:55:59 -- common/autotest_common.sh@857 -- # local i
00:13:31.828 20:55:59 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:13:31.828 20:55:59 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:13:31.828 20:55:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:13:31.828 20:55:59 -- common/autotest_common.sh@861 -- # break
00:13:31.828 20:55:59 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:13:31.828 20:55:59 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:13:31.828 20:55:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:31.828 1+0 records in
00:13:31.828 1+0 records out
00:13:31.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437112 s, 9.4 MB/s
00:13:31.828 20:55:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:31.828 20:55:59 -- common/autotest_common.sh@874 -- # size=4096
00:13:31.828 20:55:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:31.828 20:55:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:13:31.828 20:55:59 -- common/autotest_common.sh@877 -- # return 0
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:31.828 20:55:59 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:31.828 20:55:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:32.087 20:56:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:32.087 {
00:13:32.087 "nbd_device": "/dev/nbd0",
00:13:32.087 "bdev_name": "raid"
00:13:32.087 }
00:13:32.087 ]'
00:13:32.087 20:56:00 -- bdev/nbd_common.sh@64 -- # echo '[
00:13:32.087 {
00:13:32.087 "nbd_device": "/dev/nbd0",
00:13:32.087 "bdev_name": "raid"
00:13:32.087 }
00:13:32.087 ]'
00:13:32.087 20:56:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:32.346 20:56:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:32.346 20:56:00 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:32.346 20:56:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:32.346 20:56:00 -- bdev/nbd_common.sh@65 -- # count=1
00:13:32.346 20:56:00 -- bdev/nbd_common.sh@66 -- # echo 1
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@98 -- # count=1
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@20 -- # local blksize
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:32.346 4096+0 records in
00:13:32.346 4096+0 records out
00:13:32.346 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0281303 s, 74.6 MB/s
00:13:32.346 20:56:00 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:32.605 4096+0 records in
00:13:32.605 4096+0 records out
00:13:32.605 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.266561 s, 7.9 MB/s
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:32.605 128+0 records in
00:13:32.605 128+0 records out
00:13:32.605 65536 bytes (66 kB, 64 KiB) copied, 0.000645596 s, 102 MB/s
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:32.605 2035+0 records in
00:13:32.605 2035+0 records out
00:13:32.605 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00479514 s, 217 MB/s
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:32.605 456+0 records in
00:13:32.605 456+0 records out
00:13:32.605 233472 bytes (233 kB, 228 KiB) copied, 0.00087825 s, 266 MB/s
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@53 -- # return 0
00:13:32.605 20:56:00 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@51 -- # local i
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:32.605 20:56:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:32.864 [2024-06-09 20:56:00.944602] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@41 -- # break
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@45 -- # return 0
00:13:32.864 20:56:00 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:32.864 20:56:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@65 -- # echo ''
00:13:33.122 20:56:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:33.381 20:56:01 -- bdev/nbd_common.sh@65 -- # true
00:13:33.381 20:56:01 -- bdev/nbd_common.sh@65 -- # count=0
00:13:33.381 20:56:01 -- bdev/nbd_common.sh@66 -- # echo 0
00:13:33.381 20:56:01 -- bdev/bdev_raid.sh@106 -- # count=0
00:13:33.381 20:56:01 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
00:13:33.381 20:56:01 -- bdev/bdev_raid.sh@111 -- # killprocess 111086
00:13:33.381 20:56:01 -- common/autotest_common.sh@926 -- # '[' -z 111086 ']'
00:13:33.381 20:56:01 -- common/autotest_common.sh@930 -- # kill -0 111086
00:13:33.381 20:56:01 -- common/autotest_common.sh@931 -- # uname
00:13:33.381 20:56:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:33.381 20:56:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111086
00:13:33.381 20:56:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:33.381 20:56:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:33.381 killing process with pid 111086
00:13:33.381 20:56:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111086'
00:13:33.381 20:56:01 -- common/autotest_common.sh@945 -- # kill 111086
00:13:33.381 [2024-06-09 20:56:01.329669] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:33.381 20:56:01 -- common/autotest_common.sh@950 -- # wait 111086
00:13:33.381 [2024-06-09 20:56:01.329794] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:33.381 [2024-06-09 20:56:01.329860] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:33.381 [2024-06-09 20:56:01.329875] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline
00:13:33.381 [2024-06-09 20:56:01.475011] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@113 -- # return 0
00:13:34.757 
00:13:34.757 real 0m4.378s
00:13:34.757 user 0m5.651s
00:13:34.757 sys 0m0.966s
00:13:34.757 20:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:34.757 20:56:02 -- common/autotest_common.sh@10 -- # set +x
00:13:34.757 ************************************
00:13:34.757 END TEST raid_function_test_raid0
00:13:34.757 ************************************
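The raid_unmap_data_verify loop traced above writes 2 MiB of random data through /dev/nbd0, then for each (offset, count) pair zeroes the range in the local reference file, discards the same byte range on the nbd device, flushes, and re-compares; discarded blocks on the raid bdev are expected to read back as zeroes. A condensed sketch of one pass, using the same offsets and sizes as this run (not the verbatim bdev_raid.sh):

nbd=/dev/nbd0
dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
dd if=/raidrandtest of=$nbd bs=512 count=4096 oflag=direct
blockdev --flushbufs $nbd
cmp -b -n 2097152 /raidrandtest $nbd
offs=(0 1028 321); nums=(128 2035 456)   # block offsets/counts from the trace
for i in "${!offs[@]}"; do
    dd if=/dev/zero of=/raidrandtest bs=512 seek=${offs[$i]} count=${nums[$i]} conv=notrunc
    blkdiscard -o $(( offs[i] * 512 )) -l $(( nums[i] * 512 )) $nbd
    blockdev --flushbufs $nbd
    cmp -b -n 2097152 /raidrandtest $nbd || exit 1
done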
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat
00:13:34.757 20:56:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:34.757 20:56:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:34.757 20:56:02 -- common/autotest_common.sh@10 -- # set +x
00:13:34.757 ************************************
00:13:34.757 START TEST raid_function_test_concat
00:13:34.757 ************************************
00:13:34.757 20:56:02 -- common/autotest_common.sh@1104 -- # raid_function_test concat
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@83 -- # local raid_bdev
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@86 -- # raid_pid=111242
00:13:34.757 Process raid pid: 111242
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 111242'
00:13:34.757 20:56:02 -- bdev/bdev_raid.sh@88 -- # waitforlisten 111242 /var/tmp/spdk-raid.sock
00:13:34.757 20:56:02 -- common/autotest_common.sh@819 -- # '[' -z 111242 ']'
00:13:34.757 20:56:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:34.757 20:56:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:34.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:34.757 20:56:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:34.757 20:56:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:34.757 20:56:02 -- common/autotest_common.sh@10 -- # set +x
00:13:34.757 [2024-06-09 20:56:02.615493] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:13:34.757 [2024-06-09 20:56:02.615729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:35.016 [2024-06-09 20:56:02.777248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:35.016 [2024-06-09 20:56:02.967947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:35.016 [2024-06-09 20:56:03.154018] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:35.583 20:56:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:35.583 20:56:03 -- common/autotest_common.sh@852 -- # return 0
00:13:35.583 20:56:03 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat
00:13:35.583 20:56:03 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat
00:13:35.583 20:56:03 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:35.583 20:56:03 -- bdev/bdev_raid.sh@70 -- # cat
00:13:35.583 20:56:03 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
00:13:35.842 [2024-06-09 20:56:03.792449] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:35.842 [2024-06-09 20:56:03.794501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:35.842 [2024-06-09 20:56:03.794601] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:35.842 [2024-06-09 20:56:03.794615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:35.842 [2024-06-09 20:56:03.794747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0
00:13:35.842 [2024-06-09 20:56:03.795156] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:35.842 [2024-06-09 20:56:03.795183] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80
00:13:35.842 [2024-06-09 20:56:03.795338] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:35.842 20:56:03 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt
00:13:35.842 20:56:03 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online
00:13:35.842 20:56:03 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)'
00:13:36.100 20:56:04 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid
00:13:36.100 20:56:04 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']'
00:13:36.100 20:56:04 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:36.100 20:56:04 -- bdev/nbd_common.sh@12 -- # local i
00:13:36.101 20:56:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:36.101 20:56:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:36.101 20:56:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:36.359 [2024-06-09 20:56:04.336593] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:13:36.359 /dev/nbd0
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:36.359 20:56:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:13:36.359 20:56:04 -- common/autotest_common.sh@857 -- # local i
00:13:36.359 20:56:04 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:13:36.359 20:56:04 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:13:36.359 20:56:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:13:36.359 20:56:04 -- common/autotest_common.sh@861 -- # break
00:13:36.359 20:56:04 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:13:36.359 20:56:04 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:13:36.359 20:56:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:36.359 1+0 records in
00:13:36.359 1+0 records out
00:13:36.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331497 s, 12.4 MB/s
00:13:36.359 20:56:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:36.359 20:56:04 -- common/autotest_common.sh@874 -- # size=4096
00:13:36.359 20:56:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:36.359 20:56:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:13:36.359 20:56:04 -- common/autotest_common.sh@877 -- # return 0
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:36.359 20:56:04 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:36.359 20:56:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:36.617 20:56:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:36.617 {
00:13:36.617 "nbd_device": "/dev/nbd0",
00:13:36.617 "bdev_name": "raid"
00:13:36.618 }
00:13:36.618 ]'
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@64 -- # echo '[
00:13:36.618 {
00:13:36.618 "nbd_device": "/dev/nbd0",
00:13:36.618 "bdev_name": "raid"
00:13:36.618 }
00:13:36.618 ]'
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@65 -- # count=1
00:13:36.618 20:56:04 -- bdev/nbd_common.sh@66 -- # echo 1
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@98 -- # count=1
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']'
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@20 -- # local blksize
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@21 -- # blksize=512
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321')
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456')
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@26 -- # local unmap_off
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@27 -- # local unmap_len
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
00:13:36.618 4096+0 records in
00:13:36.618 4096+0 records out
00:13:36.618 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0236169 s, 88.8 MB/s
00:13:36.618 20:56:04 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:36.875 4096+0 records in
00:13:36.875 4096+0 records out
00:13:36.875 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.249337 s, 8.4 MB/s
00:13:36.875 20:56:04 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0
00:13:36.875 20:56:04 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:36.875 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i = 0 ))
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@38 -- # unmap_off=0
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:36.876 128+0 records in
00:13:36.876 128+0 records out
00:13:36.876 65536 bytes (66 kB, 64 KiB) copied, 0.000860698 s, 76.1 MB/s
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:36.876 2035+0 records in
00:13:36.876 2035+0 records out
00:13:36.876 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0085158 s, 122 MB/s
00:13:36.876 20:56:05 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:37.134 456+0 records in
00:13:37.134 456+0 records out
00:13:37.134 233472 bytes (233 kB, 228 KiB) copied, 0.00201524 s, 116 MB/s
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i++ ))
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@37 -- # (( i < 3 ))
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@53 -- # return 0
00:13:37.134 20:56:05 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@51 -- # local i
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:37.134 20:56:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:37.392 [2024-06-09 20:56:05.368831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@41 -- # break
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@45 -- # return 0
00:13:37.392 20:56:05 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:37.392 20:56:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@65 -- # echo ''
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@65 -- # true
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@65 -- # count=0
00:13:37.651 20:56:05 -- bdev/nbd_common.sh@66 -- # echo 0
00:13:37.651 20:56:05 -- bdev/bdev_raid.sh@106 -- # count=0
00:13:37.651 20:56:05 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']'
00:13:37.651 20:56:05 -- bdev/bdev_raid.sh@111 -- # killprocess 111242
00:13:37.651 20:56:05 -- common/autotest_common.sh@926 -- # '[' -z 111242 ']'
00:13:37.651 20:56:05 -- common/autotest_common.sh@930 -- # kill -0 111242
00:13:37.651 20:56:05 -- common/autotest_common.sh@931 -- # uname
00:13:37.651 20:56:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:37.651 20:56:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111242
00:13:37.651 20:56:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:37.651 20:56:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:37.651 killing process with pid 111242
00:13:37.651 20:56:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111242'
00:13:37.651 20:56:05 -- common/autotest_common.sh@945 -- # kill 111242
00:13:37.651 [2024-06-09 20:56:05.751877] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:37.651 20:56:05 -- common/autotest_common.sh@950 -- # wait 111242
00:13:37.651 [2024-06-09 20:56:05.751967] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:37.651 [2024-06-09 20:56:05.752025] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:37.651 [2024-06-09 20:56:05.752037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline
00:13:37.909 [2024-06-09 20:56:05.896062] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@113 -- # return 0
00:13:38.844 
00:13:38.844 real 0m4.294s
00:13:38.844 user 0m5.634s
00:13:38.844 sys 0m0.915s
00:13:38.844 20:56:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:38.844 ************************************
00:13:38.844 20:56:06 -- common/autotest_common.sh@10 -- # set +x
00:13:38.844 END TEST raid_function_test_concat
************************************
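Both function tests above (raid0 and concat) configure the array the same way: configure_raid_bdev writes an rpcs.txt batch and feeds it to rpc.py in a single invocation. The log only shows the file being cat'ed and removed, so the exact contents below are a hedged reconstruction; the sizes are inferred from the reported blockcnt 131072 at blocklen 512, i.e. two 32 MiB malloc bases:

rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
cat > rpcs.txt <<EOF
bdev_malloc_create 32 512 -b Base_1
bdev_malloc_create 32 512 -b Base_2
bdev_raid_create -z 64 -r concat -b "Base_1 Base_2" -n raid
EOF
$rpc_py < rpcs.txt    # rpc.py executes each line of stdin as one RPC
rm -rf rpcs.txt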
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test
00:13:38.844 20:56:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:13:38.844 20:56:06 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:38.844 20:56:06 -- common/autotest_common.sh@10 -- # set +x
00:13:38.844 ************************************
00:13:38.844 START TEST raid0_resize_test
00:13:38.844 ************************************
00:13:38.844 20:56:06 -- common/autotest_common.sh@1104 -- # raid0_resize_test
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@293 -- # local blksize=512
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@296 -- # local blkcnt
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@301 -- # raid_pid=111397
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:38.844 Process raid pid: 111397
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 111397'
00:13:38.844 20:56:06 -- bdev/bdev_raid.sh@303 -- # waitforlisten 111397 /var/tmp/spdk-raid.sock
00:13:38.844 20:56:06 -- common/autotest_common.sh@819 -- # '[' -z 111397 ']'
00:13:38.844 20:56:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:38.844 20:56:06 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:38.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:38.844 20:56:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:38.844 20:56:06 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:38.844 20:56:06 -- common/autotest_common.sh@10 -- # set +x
00:13:38.844 [2024-06-09 20:56:06.947718] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:13:38.844 [2024-06-09 20:56:06.947884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:39.102 [2024-06-09 20:56:07.092645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:39.102 [2024-06-09 20:56:07.261839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.359 [2024-06-09 20:56:07.430190] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:39.927 20:56:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:39.927 20:56:07 -- common/autotest_common.sh@852 -- # return 0
00:13:39.927 20:56:07 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
00:13:40.185 Base_1
00:13:40.185 20:56:08 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
00:13:40.185 Base_2
00:13:40.185 20:56:08 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
00:13:40.444 [2024-06-09 20:56:08.533897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:40.444 [2024-06-09 20:56:08.535743] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:40.444 [2024-06-09 20:56:08.535812] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80
00:13:40.444 [2024-06-09 20:56:08.535824] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:40.444 [2024-06-09 20:56:08.535944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450
00:13:40.444 [2024-06-09 20:56:08.536295] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80
00:13:40.444 [2024-06-09 20:56:08.536319] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80
00:13:40.444 [2024-06-09 20:56:08.536470] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:40.444 20:56:08 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
00:13:40.704 [2024-06-09 20:56:08.729959] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:40.704 [2024-06-09 20:56:08.730005] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:13:40.704 true
00:13:40.704 20:56:08 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:40.704 20:56:08 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks'
00:13:40.963 [2024-06-09 20:56:08.978176] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:40.963 20:56:08 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072
00:13:40.963 20:56:08 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64
00:13:40.963 20:56:08 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']'
00:13:40.963 20:56:08 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
00:13:41.222 [2024-06-09 20:56:09.170046] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:41.222 [2024-06-09 20:56:09.170075] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:13:41.222 [2024-06-09 20:56:09.170116] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144
00:13:41.222 [2024-06-09 20:56:09.170189] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:13:41.222 true
00:13:41.222 20:56:09 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks'
00:13:41.222 20:56:09 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:41.481 [2024-06-09 20:56:09.402224] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:41.481 20:56:09 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144
00:13:41.481 20:56:09 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128
00:13:41.481 20:56:09 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']'
00:13:41.481 20:56:09 -- bdev/bdev_raid.sh@332 -- # killprocess 111397
00:13:41.481 20:56:09 -- common/autotest_common.sh@926 -- # '[' -z 111397 ']'
00:13:41.481 20:56:09 -- common/autotest_common.sh@930 -- # kill -0 111397
00:13:41.481 20:56:09 -- common/autotest_common.sh@931 -- # uname
00:13:41.481 20:56:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:41.481 20:56:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111397
00:13:41.481 20:56:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0
killing process with pid 111397
00:13:41.481 20:56:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:41.481 20:56:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111397'
00:13:41.481 20:56:09 -- common/autotest_common.sh@945 -- # kill 111397
00:13:41.481 [2024-06-09 20:56:09.442383] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:41.481 20:56:09 -- common/autotest_common.sh@950 -- # wait 111397
00:13:41.481 [2024-06-09 20:56:09.442501] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:41.481 [2024-06-09 20:56:09.442559] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:41.481 [2024-06-09 20:56:09.442569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline
00:13:41.481 [2024-06-09 20:56:09.443156] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:42.421 20:56:10 -- bdev/bdev_raid.sh@334 -- # return 0
00:13:42.421 
00:13:42.421 real 0m3.487s
00:13:42.421 user 0m4.972s
00:13:42.421 sys 0m0.496s
00:13:42.421 20:56:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:42.421 20:56:10 -- common/autotest_common.sh@10 -- # set +x
00:13:42.421 ************************************
00:13:42.421 END TEST raid0_resize_test
00:13:42.421 ************************************
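raid0_resize_test, which just completed above, is a straight sequence of RPCs; a reduced replay against the bdev_svc socket, with values taken directly from the trace:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_null_create Base_1 32 512
$rpc bdev_null_create Base_2 32 512
$rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
$rpc bdev_null_resize Base_1 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072: raid0 is capped by the smallest base
$rpc bdev_null_resize Base_2 64
$rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 262144 once both bases have grown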
00:13:42.421 [2024-06-09 20:56:10.509427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.680 [2024-06-09 20:56:10.669787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.680 [2024-06-09 20:56:10.844341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.939 [2024-06-09 20:56:11.019270] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.506 20:56:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.506 20:56:11 -- common/autotest_common.sh@852 -- # return 0 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:43.506 [2024-06-09 20:56:11.635139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.506 [2024-06-09 20:56:11.635238] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.506 [2024-06-09 20:56:11.635267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.506 [2024-06-09 20:56:11.635288] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.506 20:56:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.764 20:56:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:43.765 "name": "Existed_Raid", 00:13:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.765 "strip_size_kb": 64, 00:13:43.765 "state": "configuring", 00:13:43.765 "raid_level": "raid0", 00:13:43.765 "superblock": false, 00:13:43.765 "num_base_bdevs": 2, 00:13:43.765 "num_base_bdevs_discovered": 0, 00:13:43.765 "num_base_bdevs_operational": 2, 00:13:43.765 "base_bdevs_list": [ 00:13:43.765 { 00:13:43.765 "name": "BaseBdev1", 00:13:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.765 "is_configured": false, 00:13:43.765 "data_offset": 0, 00:13:43.765 "data_size": 0 00:13:43.765 }, 00:13:43.765 { 00:13:43.765 "name": "BaseBdev2", 00:13:43.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.765 "is_configured": false, 00:13:43.765 "data_offset": 0, 00:13:43.765 "data_size": 0 00:13:43.765 } 00:13:43.765 ] 00:13:43.765 }' 00:13:43.765 20:56:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:43.765 20:56:11 -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.332 20:56:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.590 [2024-06-09 20:56:12.647314] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.590 [2024-06-09 20:56:12.647367] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:13:44.590 20:56:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:44.849 [2024-06-09 20:56:12.895401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.849 [2024-06-09 20:56:12.895501] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.849 [2024-06-09 20:56:12.895545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.849 [2024-06-09 20:56:12.895569] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.849 20:56:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:45.108 [2024-06-09 20:56:13.185314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.108 BaseBdev1 00:13:45.108 20:56:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:45.108 20:56:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:45.108 20:56:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:45.108 20:56:13 -- common/autotest_common.sh@889 -- # local i 00:13:45.108 20:56:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:45.108 20:56:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:45.108 20:56:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.367 20:56:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.625 [ 00:13:45.625 { 00:13:45.625 "name": "BaseBdev1", 00:13:45.625 "aliases": [ 00:13:45.625 "6314a6c2-1e50-49ac-bba1-079ae1e8c284" 00:13:45.625 ], 00:13:45.625 "product_name": "Malloc disk", 00:13:45.625 "block_size": 512, 00:13:45.625 "num_blocks": 65536, 00:13:45.625 "uuid": "6314a6c2-1e50-49ac-bba1-079ae1e8c284", 00:13:45.625 "assigned_rate_limits": { 00:13:45.625 "rw_ios_per_sec": 0, 00:13:45.625 "rw_mbytes_per_sec": 0, 00:13:45.625 "r_mbytes_per_sec": 0, 00:13:45.625 "w_mbytes_per_sec": 0 00:13:45.625 }, 00:13:45.625 "claimed": true, 00:13:45.625 "claim_type": "exclusive_write", 00:13:45.625 "zoned": false, 00:13:45.625 "supported_io_types": { 00:13:45.625 "read": true, 00:13:45.625 "write": true, 00:13:45.625 "unmap": true, 00:13:45.625 "write_zeroes": true, 00:13:45.625 "flush": true, 00:13:45.625 "reset": true, 00:13:45.625 "compare": false, 00:13:45.625 "compare_and_write": false, 00:13:45.625 "abort": true, 00:13:45.625 "nvme_admin": false, 00:13:45.625 "nvme_io": false 00:13:45.625 }, 00:13:45.625 "memory_domains": [ 00:13:45.625 { 00:13:45.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.625 "dma_device_type": 2 00:13:45.625 } 00:13:45.625 ], 00:13:45.625 "driver_specific": {} 00:13:45.625 } 00:13:45.625 ] 00:13:45.625 20:56:13 
-- common/autotest_common.sh@895 -- # return 0 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.625 20:56:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.884 20:56:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.884 "name": "Existed_Raid", 00:13:45.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.884 "strip_size_kb": 64, 00:13:45.884 "state": "configuring", 00:13:45.884 "raid_level": "raid0", 00:13:45.884 "superblock": false, 00:13:45.884 "num_base_bdevs": 2, 00:13:45.884 "num_base_bdevs_discovered": 1, 00:13:45.884 "num_base_bdevs_operational": 2, 00:13:45.884 "base_bdevs_list": [ 00:13:45.884 { 00:13:45.884 "name": "BaseBdev1", 00:13:45.884 "uuid": "6314a6c2-1e50-49ac-bba1-079ae1e8c284", 00:13:45.884 "is_configured": true, 00:13:45.884 "data_offset": 0, 00:13:45.884 "data_size": 65536 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "name": "BaseBdev2", 00:13:45.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.884 "is_configured": false, 00:13:45.884 "data_offset": 0, 00:13:45.884 "data_size": 0 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }' 00:13:45.884 20:56:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.884 20:56:13 -- common/autotest_common.sh@10 -- # set +x 00:13:46.451 20:56:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.451 [2024-06-09 20:56:14.593693] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.451 [2024-06-09 20:56:14.593778] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:46.451 20:56:14 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:46.451 20:56:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.709 [2024-06-09 20:56:14.821802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.709 [2024-06-09 20:56:14.823787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.709 [2024-06-09 20:56:14.823867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:46.709 20:56:14 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.709 20:56:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.967 20:56:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.967 "name": "Existed_Raid", 00:13:46.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.967 "strip_size_kb": 64, 00:13:46.967 "state": "configuring", 00:13:46.967 "raid_level": "raid0", 00:13:46.967 "superblock": false, 00:13:46.967 "num_base_bdevs": 2, 00:13:46.967 "num_base_bdevs_discovered": 1, 00:13:46.967 "num_base_bdevs_operational": 2, 00:13:46.967 "base_bdevs_list": [ 00:13:46.967 { 00:13:46.967 "name": "BaseBdev1", 00:13:46.967 "uuid": "6314a6c2-1e50-49ac-bba1-079ae1e8c284", 00:13:46.967 "is_configured": true, 00:13:46.967 "data_offset": 0, 00:13:46.967 "data_size": 65536 00:13:46.967 }, 00:13:46.967 { 00:13:46.967 "name": "BaseBdev2", 00:13:46.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.967 "is_configured": false, 00:13:46.967 "data_offset": 0, 00:13:46.967 "data_size": 0 00:13:46.967 } 00:13:46.967 ] 00:13:46.967 }' 00:13:46.967 20:56:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.967 20:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:47.533 20:56:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:47.791 [2024-06-09 20:56:15.874335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.791 [2024-06-09 20:56:15.874401] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:13:47.791 [2024-06-09 20:56:15.874410] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:47.791 [2024-06-09 20:56:15.874531] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:47.791 [2024-06-09 20:56:15.874936] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:13:47.791 [2024-06-09 20:56:15.874963] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:13:47.791 [2024-06-09 20:56:15.875275] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.791 BaseBdev2 00:13:47.791 20:56:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:47.791 20:56:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:47.791 20:56:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:47.791 20:56:15 -- common/autotest_common.sh@889 -- # local i 00:13:47.791 20:56:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:47.791 20:56:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:47.791 
20:56:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.049 20:56:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:48.307 [ 00:13:48.307 { 00:13:48.307 "name": "BaseBdev2", 00:13:48.307 "aliases": [ 00:13:48.307 "2844eb09-c678-4a22-b318-f9bf8b8c84e7" 00:13:48.307 ], 00:13:48.307 "product_name": "Malloc disk", 00:13:48.307 "block_size": 512, 00:13:48.307 "num_blocks": 65536, 00:13:48.307 "uuid": "2844eb09-c678-4a22-b318-f9bf8b8c84e7", 00:13:48.307 "assigned_rate_limits": { 00:13:48.307 "rw_ios_per_sec": 0, 00:13:48.307 "rw_mbytes_per_sec": 0, 00:13:48.307 "r_mbytes_per_sec": 0, 00:13:48.307 "w_mbytes_per_sec": 0 00:13:48.307 }, 00:13:48.307 "claimed": true, 00:13:48.307 "claim_type": "exclusive_write", 00:13:48.307 "zoned": false, 00:13:48.307 "supported_io_types": { 00:13:48.307 "read": true, 00:13:48.307 "write": true, 00:13:48.307 "unmap": true, 00:13:48.307 "write_zeroes": true, 00:13:48.307 "flush": true, 00:13:48.307 "reset": true, 00:13:48.307 "compare": false, 00:13:48.307 "compare_and_write": false, 00:13:48.307 "abort": true, 00:13:48.307 "nvme_admin": false, 00:13:48.307 "nvme_io": false 00:13:48.307 }, 00:13:48.307 "memory_domains": [ 00:13:48.307 { 00:13:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.307 "dma_device_type": 2 00:13:48.307 } 00:13:48.307 ], 00:13:48.307 "driver_specific": {} 00:13:48.307 } 00:13:48.307 ] 00:13:48.307 20:56:16 -- common/autotest_common.sh@895 -- # return 0 00:13:48.307 20:56:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:48.307 20:56:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:48.307 20:56:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:48.307 20:56:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.308 20:56:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.566 20:56:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.566 "name": "Existed_Raid", 00:13:48.566 "uuid": "86ce687d-9c63-425b-8dae-5676e88a9fd2", 00:13:48.566 "strip_size_kb": 64, 00:13:48.566 "state": "online", 00:13:48.566 "raid_level": "raid0", 00:13:48.566 "superblock": false, 00:13:48.566 "num_base_bdevs": 2, 00:13:48.566 "num_base_bdevs_discovered": 2, 00:13:48.566 "num_base_bdevs_operational": 2, 00:13:48.566 "base_bdevs_list": [ 00:13:48.566 { 00:13:48.566 "name": "BaseBdev1", 00:13:48.566 "uuid": "6314a6c2-1e50-49ac-bba1-079ae1e8c284", 00:13:48.566 "is_configured": true, 00:13:48.566 "data_offset": 0, 00:13:48.566 "data_size": 65536 00:13:48.566 }, 00:13:48.566 { 00:13:48.566 "name": "BaseBdev2", 
00:13:48.566 "uuid": "2844eb09-c678-4a22-b318-f9bf8b8c84e7", 00:13:48.566 "is_configured": true, 00:13:48.566 "data_offset": 0, 00:13:48.566 "data_size": 65536 00:13:48.566 } 00:13:48.566 ] 00:13:48.566 }' 00:13:48.566 20:56:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.566 20:56:16 -- common/autotest_common.sh@10 -- # set +x 00:13:49.134 20:56:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:49.393 [2024-06-09 20:56:17.386766] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.393 [2024-06-09 20:56:17.386819] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.393 [2024-06-09 20:56:17.386907] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.393 20:56:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.652 20:56:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.652 "name": "Existed_Raid", 00:13:49.652 "uuid": "86ce687d-9c63-425b-8dae-5676e88a9fd2", 00:13:49.652 "strip_size_kb": 64, 00:13:49.652 "state": "offline", 00:13:49.652 "raid_level": "raid0", 00:13:49.652 "superblock": false, 00:13:49.652 "num_base_bdevs": 2, 00:13:49.652 "num_base_bdevs_discovered": 1, 00:13:49.652 "num_base_bdevs_operational": 1, 00:13:49.652 "base_bdevs_list": [ 00:13:49.652 { 00:13:49.652 "name": null, 00:13:49.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.652 "is_configured": false, 00:13:49.652 "data_offset": 0, 00:13:49.652 "data_size": 65536 00:13:49.652 }, 00:13:49.652 { 00:13:49.652 "name": "BaseBdev2", 00:13:49.652 "uuid": "2844eb09-c678-4a22-b318-f9bf8b8c84e7", 00:13:49.652 "is_configured": true, 00:13:49.652 "data_offset": 0, 00:13:49.652 "data_size": 65536 00:13:49.652 } 00:13:49.652 ] 00:13:49.652 }' 00:13:49.652 20:56:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.652 20:56:17 -- common/autotest_common.sh@10 -- # set +x 00:13:50.219 20:56:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:50.219 20:56:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:50.219 20:56:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:50.219 20:56:18 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.478 20:56:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:50.478 20:56:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.478 20:56:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:50.737 [2024-06-09 20:56:18.688475] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.737 [2024-06-09 20:56:18.688594] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:13:50.737 20:56:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:50.737 20:56:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:50.737 20:56:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.737 20:56:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.996 20:56:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:50.996 20:56:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:50.996 20:56:19 -- bdev/bdev_raid.sh@287 -- # killprocess 111488 00:13:50.996 20:56:19 -- common/autotest_common.sh@926 -- # '[' -z 111488 ']' 00:13:50.996 20:56:19 -- common/autotest_common.sh@930 -- # kill -0 111488 00:13:50.996 20:56:19 -- common/autotest_common.sh@931 -- # uname 00:13:50.996 20:56:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.996 20:56:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111488 00:13:50.996 20:56:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:50.996 20:56:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:50.996 20:56:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111488' 00:13:50.996 killing process with pid 111488 00:13:50.996 20:56:19 -- common/autotest_common.sh@945 -- # kill 111488 00:13:50.996 20:56:19 -- common/autotest_common.sh@950 -- # wait 111488 00:13:50.996 [2024-06-09 20:56:19.051377] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.996 [2024-06-09 20:56:19.051515] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.933 ************************************ 00:13:51.933 END TEST raid_state_function_test 00:13:51.933 ************************************ 00:13:51.933 20:56:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:51.933 00:13:51.933 real 0m9.653s 00:13:51.933 user 0m16.664s 00:13:51.933 sys 0m1.220s 00:13:51.934 20:56:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.934 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:52.193 20:56:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:52.193 20:56:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.193 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:52.193 ************************************ 00:13:52.193 START TEST raid_state_function_test_sb 00:13:52.193 ************************************ 00:13:52.193 20:56:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:52.193 20:56:20 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=111803 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 111803' 00:13:52.193 Process raid pid: 111803 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:52.193 20:56:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 111803 /var/tmp/spdk-raid.sock 00:13:52.193 20:56:20 -- common/autotest_common.sh@819 -- # '[' -z 111803 ']' 00:13:52.193 20:56:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.193 20:56:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:52.193 20:56:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.193 20:56:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.193 20:56:20 -- common/autotest_common.sh@10 -- # set +x 00:13:52.193 [2024-06-09 20:56:20.223177] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:13:52.193 [2024-06-09 20:56:20.223395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.452 [2024-06-09 20:56:20.393357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.452 [2024-06-09 20:56:20.611717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.711 [2024-06-09 20:56:20.803434] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.970 20:56:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.970 20:56:21 -- common/autotest_common.sh@852 -- # return 0 00:13:52.970 20:56:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:53.229 [2024-06-09 20:56:21.364036] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.229 [2024-06-09 20:56:21.364168] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.229 [2024-06-09 20:56:21.364183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.229 [2024-06-09 20:56:21.364212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.229 20:56:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.488 20:56:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:53.488 "name": "Existed_Raid", 00:13:53.488 "uuid": "a8ecdf81-5717-4ef7-8708-17d21ff1d285", 00:13:53.488 "strip_size_kb": 64, 00:13:53.488 "state": "configuring", 00:13:53.488 "raid_level": "raid0", 00:13:53.488 "superblock": true, 00:13:53.488 "num_base_bdevs": 2, 00:13:53.488 "num_base_bdevs_discovered": 0, 00:13:53.488 "num_base_bdevs_operational": 2, 00:13:53.488 "base_bdevs_list": [ 00:13:53.488 { 00:13:53.488 "name": "BaseBdev1", 00:13:53.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.488 "is_configured": false, 00:13:53.488 "data_offset": 0, 00:13:53.488 "data_size": 0 00:13:53.488 }, 00:13:53.488 { 00:13:53.488 "name": "BaseBdev2", 00:13:53.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.488 "is_configured": false, 00:13:53.488 "data_offset": 0, 00:13:53.488 "data_size": 0 00:13:53.488 } 00:13:53.488 ] 00:13:53.488 }' 00:13:53.488 20:56:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:53.488 20:56:21 -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.425 20:56:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:54.425 [2024-06-09 20:56:22.508032] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.425 [2024-06-09 20:56:22.508113] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:13:54.425 20:56:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:54.684 [2024-06-09 20:56:22.708137] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.684 [2024-06-09 20:56:22.708272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.684 [2024-06-09 20:56:22.708303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.684 [2024-06-09 20:56:22.708329] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.684 20:56:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.943 [2024-06-09 20:56:22.990115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.943 BaseBdev1 00:13:54.943 20:56:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:54.943 20:56:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:54.943 20:56:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.943 20:56:23 -- common/autotest_common.sh@889 -- # local i 00:13:54.943 20:56:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.943 20:56:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.943 20:56:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.202 20:56:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.462 [ 00:13:55.462 { 00:13:55.462 "name": "BaseBdev1", 00:13:55.462 "aliases": [ 00:13:55.462 "f5154752-5a5f-4c07-a119-35f49d95ccbb" 00:13:55.462 ], 00:13:55.462 "product_name": "Malloc disk", 00:13:55.462 "block_size": 512, 00:13:55.462 "num_blocks": 65536, 00:13:55.462 "uuid": "f5154752-5a5f-4c07-a119-35f49d95ccbb", 00:13:55.462 "assigned_rate_limits": { 00:13:55.462 "rw_ios_per_sec": 0, 00:13:55.462 "rw_mbytes_per_sec": 0, 00:13:55.462 "r_mbytes_per_sec": 0, 00:13:55.462 "w_mbytes_per_sec": 0 00:13:55.462 }, 00:13:55.462 "claimed": true, 00:13:55.462 "claim_type": "exclusive_write", 00:13:55.462 "zoned": false, 00:13:55.462 "supported_io_types": { 00:13:55.462 "read": true, 00:13:55.462 "write": true, 00:13:55.462 "unmap": true, 00:13:55.462 "write_zeroes": true, 00:13:55.462 "flush": true, 00:13:55.462 "reset": true, 00:13:55.462 "compare": false, 00:13:55.462 "compare_and_write": false, 00:13:55.462 "abort": true, 00:13:55.462 "nvme_admin": false, 00:13:55.462 "nvme_io": false 00:13:55.462 }, 00:13:55.462 "memory_domains": [ 00:13:55.462 { 00:13:55.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.462 "dma_device_type": 2 00:13:55.462 } 00:13:55.462 ], 00:13:55.462 "driver_specific": {} 00:13:55.462 } 00:13:55.462 ] 00:13:55.462 
20:56:23 -- common/autotest_common.sh@895 -- # return 0 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.462 20:56:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.721 20:56:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.721 "name": "Existed_Raid", 00:13:55.721 "uuid": "d7197fd0-418c-4333-a32e-103c3e24ce5c", 00:13:55.721 "strip_size_kb": 64, 00:13:55.721 "state": "configuring", 00:13:55.721 "raid_level": "raid0", 00:13:55.721 "superblock": true, 00:13:55.721 "num_base_bdevs": 2, 00:13:55.721 "num_base_bdevs_discovered": 1, 00:13:55.721 "num_base_bdevs_operational": 2, 00:13:55.721 "base_bdevs_list": [ 00:13:55.721 { 00:13:55.721 "name": "BaseBdev1", 00:13:55.721 "uuid": "f5154752-5a5f-4c07-a119-35f49d95ccbb", 00:13:55.721 "is_configured": true, 00:13:55.721 "data_offset": 2048, 00:13:55.721 "data_size": 63488 00:13:55.721 }, 00:13:55.721 { 00:13:55.721 "name": "BaseBdev2", 00:13:55.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.721 "is_configured": false, 00:13:55.721 "data_offset": 0, 00:13:55.721 "data_size": 0 00:13:55.721 } 00:13:55.721 ] 00:13:55.721 }' 00:13:55.721 20:56:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.721 20:56:23 -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 20:56:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:56.288 [2024-06-09 20:56:24.426485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.288 [2024-06-09 20:56:24.426568] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:56.288 20:56:24 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:56.288 20:56:24 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:56.858 20:56:24 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.858 BaseBdev1 00:13:57.149 20:56:25 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:57.149 20:56:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:57.149 20:56:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:57.149 20:56:25 -- common/autotest_common.sh@889 -- # local i 00:13:57.149 20:56:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.149 20:56:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.149 20:56:25 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:57.149 20:56:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.407 [ 00:13:57.407 { 00:13:57.407 "name": "BaseBdev1", 00:13:57.407 "aliases": [ 00:13:57.407 "a599cf7f-a575-47ac-9770-d3abc0f5b9d8" 00:13:57.407 ], 00:13:57.407 "product_name": "Malloc disk", 00:13:57.407 "block_size": 512, 00:13:57.407 "num_blocks": 65536, 00:13:57.407 "uuid": "a599cf7f-a575-47ac-9770-d3abc0f5b9d8", 00:13:57.407 "assigned_rate_limits": { 00:13:57.407 "rw_ios_per_sec": 0, 00:13:57.407 "rw_mbytes_per_sec": 0, 00:13:57.407 "r_mbytes_per_sec": 0, 00:13:57.407 "w_mbytes_per_sec": 0 00:13:57.407 }, 00:13:57.407 "claimed": false, 00:13:57.407 "zoned": false, 00:13:57.407 "supported_io_types": { 00:13:57.407 "read": true, 00:13:57.407 "write": true, 00:13:57.407 "unmap": true, 00:13:57.407 "write_zeroes": true, 00:13:57.407 "flush": true, 00:13:57.407 "reset": true, 00:13:57.407 "compare": false, 00:13:57.407 "compare_and_write": false, 00:13:57.407 "abort": true, 00:13:57.407 "nvme_admin": false, 00:13:57.407 "nvme_io": false 00:13:57.407 }, 00:13:57.407 "memory_domains": [ 00:13:57.407 { 00:13:57.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.407 "dma_device_type": 2 00:13:57.407 } 00:13:57.407 ], 00:13:57.407 "driver_specific": {} 00:13:57.407 } 00:13:57.407 ] 00:13:57.407 20:56:25 -- common/autotest_common.sh@895 -- # return 0 00:13:57.407 20:56:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:57.665 [2024-06-09 20:56:25.603907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.665 [2024-06-09 20:56:25.605983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.666 [2024-06-09 20:56:25.606045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.666 20:56:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.924 20:56:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.924 "name": "Existed_Raid", 00:13:57.924 "uuid": "5b221c91-f867-4aa4-ad54-4a0a7c8f817a", 00:13:57.924 "strip_size_kb": 64, 00:13:57.924 "state": 
"configuring", 00:13:57.924 "raid_level": "raid0", 00:13:57.924 "superblock": true, 00:13:57.924 "num_base_bdevs": 2, 00:13:57.924 "num_base_bdevs_discovered": 1, 00:13:57.924 "num_base_bdevs_operational": 2, 00:13:57.924 "base_bdevs_list": [ 00:13:57.924 { 00:13:57.924 "name": "BaseBdev1", 00:13:57.924 "uuid": "a599cf7f-a575-47ac-9770-d3abc0f5b9d8", 00:13:57.924 "is_configured": true, 00:13:57.924 "data_offset": 2048, 00:13:57.924 "data_size": 63488 00:13:57.924 }, 00:13:57.924 { 00:13:57.924 "name": "BaseBdev2", 00:13:57.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.924 "is_configured": false, 00:13:57.924 "data_offset": 0, 00:13:57.924 "data_size": 0 00:13:57.924 } 00:13:57.924 ] 00:13:57.924 }' 00:13:57.924 20:56:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.924 20:56:25 -- common/autotest_common.sh@10 -- # set +x 00:13:58.489 20:56:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.748 [2024-06-09 20:56:26.797617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.748 [2024-06-09 20:56:26.797850] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:13:58.748 [2024-06-09 20:56:26.797865] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:58.748 [2024-06-09 20:56:26.797985] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:58.748 [2024-06-09 20:56:26.798335] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:13:58.748 [2024-06-09 20:56:26.798356] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:13:58.748 [2024-06-09 20:56:26.798551] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.748 BaseBdev2 00:13:58.748 20:56:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:58.748 20:56:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:58.748 20:56:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:58.748 20:56:26 -- common/autotest_common.sh@889 -- # local i 00:13:58.748 20:56:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:58.748 20:56:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:58.748 20:56:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.006 20:56:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.265 [ 00:13:59.265 { 00:13:59.265 "name": "BaseBdev2", 00:13:59.265 "aliases": [ 00:13:59.265 "97fbcddd-0b86-4ba5-a537-62e7f5b3f64d" 00:13:59.265 ], 00:13:59.265 "product_name": "Malloc disk", 00:13:59.265 "block_size": 512, 00:13:59.265 "num_blocks": 65536, 00:13:59.265 "uuid": "97fbcddd-0b86-4ba5-a537-62e7f5b3f64d", 00:13:59.265 "assigned_rate_limits": { 00:13:59.265 "rw_ios_per_sec": 0, 00:13:59.265 "rw_mbytes_per_sec": 0, 00:13:59.265 "r_mbytes_per_sec": 0, 00:13:59.265 "w_mbytes_per_sec": 0 00:13:59.265 }, 00:13:59.265 "claimed": true, 00:13:59.265 "claim_type": "exclusive_write", 00:13:59.265 "zoned": false, 00:13:59.265 "supported_io_types": { 00:13:59.265 "read": true, 00:13:59.265 "write": true, 00:13:59.265 "unmap": true, 00:13:59.265 "write_zeroes": true, 00:13:59.265 "flush": true, 00:13:59.265 
"reset": true, 00:13:59.265 "compare": false, 00:13:59.265 "compare_and_write": false, 00:13:59.265 "abort": true, 00:13:59.265 "nvme_admin": false, 00:13:59.265 "nvme_io": false 00:13:59.265 }, 00:13:59.265 "memory_domains": [ 00:13:59.265 { 00:13:59.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.265 "dma_device_type": 2 00:13:59.265 } 00:13:59.265 ], 00:13:59.265 "driver_specific": {} 00:13:59.265 } 00:13:59.265 ] 00:13:59.265 20:56:27 -- common/autotest_common.sh@895 -- # return 0 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.265 20:56:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.523 20:56:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.523 "name": "Existed_Raid", 00:13:59.523 "uuid": "5b221c91-f867-4aa4-ad54-4a0a7c8f817a", 00:13:59.523 "strip_size_kb": 64, 00:13:59.523 "state": "online", 00:13:59.523 "raid_level": "raid0", 00:13:59.523 "superblock": true, 00:13:59.523 "num_base_bdevs": 2, 00:13:59.523 "num_base_bdevs_discovered": 2, 00:13:59.523 "num_base_bdevs_operational": 2, 00:13:59.523 "base_bdevs_list": [ 00:13:59.523 { 00:13:59.523 "name": "BaseBdev1", 00:13:59.523 "uuid": "a599cf7f-a575-47ac-9770-d3abc0f5b9d8", 00:13:59.523 "is_configured": true, 00:13:59.523 "data_offset": 2048, 00:13:59.523 "data_size": 63488 00:13:59.523 }, 00:13:59.523 { 00:13:59.523 "name": "BaseBdev2", 00:13:59.523 "uuid": "97fbcddd-0b86-4ba5-a537-62e7f5b3f64d", 00:13:59.523 "is_configured": true, 00:13:59.523 "data_offset": 2048, 00:13:59.523 "data_size": 63488 00:13:59.523 } 00:13:59.523 ] 00:13:59.523 }' 00:13:59.523 20:56:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.523 20:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:00.090 20:56:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:00.348 [2024-06-09 20:56:28.382025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.348 [2024-06-09 20:56:28.382052] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.348 [2024-06-09 20:56:28.382108] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:00.348 
20:56:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.348 20:56:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.607 20:56:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.607 "name": "Existed_Raid", 00:14:00.607 "uuid": "5b221c91-f867-4aa4-ad54-4a0a7c8f817a", 00:14:00.607 "strip_size_kb": 64, 00:14:00.607 "state": "offline", 00:14:00.607 "raid_level": "raid0", 00:14:00.607 "superblock": true, 00:14:00.607 "num_base_bdevs": 2, 00:14:00.607 "num_base_bdevs_discovered": 1, 00:14:00.607 "num_base_bdevs_operational": 1, 00:14:00.607 "base_bdevs_list": [ 00:14:00.607 { 00:14:00.607 "name": null, 00:14:00.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.607 "is_configured": false, 00:14:00.607 "data_offset": 2048, 00:14:00.607 "data_size": 63488 00:14:00.607 }, 00:14:00.607 { 00:14:00.607 "name": "BaseBdev2", 00:14:00.607 "uuid": "97fbcddd-0b86-4ba5-a537-62e7f5b3f64d", 00:14:00.607 "is_configured": true, 00:14:00.607 "data_offset": 2048, 00:14:00.607 "data_size": 63488 00:14:00.607 } 00:14:00.607 ] 00:14:00.607 }' 00:14:00.607 20:56:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.607 20:56:28 -- common/autotest_common.sh@10 -- # set +x 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.543 20:56:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:01.802 [2024-06-09 20:56:29.825627] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.802 [2024-06-09 20:56:29.825722] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:14:01.802 20:56:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:01.802 20:56:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:01.802 20:56:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.802 20:56:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:02.061 20:56:30 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:02.061 20:56:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:02.061 20:56:30 -- bdev/bdev_raid.sh@287 -- # killprocess 111803 00:14:02.061 20:56:30 -- common/autotest_common.sh@926 -- # '[' -z 111803 ']' 00:14:02.061 20:56:30 -- common/autotest_common.sh@930 -- # kill -0 111803 00:14:02.061 20:56:30 -- common/autotest_common.sh@931 -- # uname 00:14:02.061 20:56:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:02.061 20:56:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111803 00:14:02.061 20:56:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:02.061 20:56:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:02.061 20:56:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111803' 00:14:02.061 killing process with pid 111803 00:14:02.061 20:56:30 -- common/autotest_common.sh@945 -- # kill 111803 00:14:02.061 20:56:30 -- common/autotest_common.sh@950 -- # wait 111803 00:14:02.061 [2024-06-09 20:56:30.183178] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.061 [2024-06-09 20:56:30.183296] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.440 ************************************ 00:14:03.440 END TEST raid_state_function_test_sb 00:14:03.440 ************************************ 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:03.440 00:14:03.440 real 0m11.062s 00:14:03.440 user 0m19.147s 00:14:03.440 sys 0m1.413s 00:14:03.440 20:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.440 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:03.440 20:56:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:03.440 20:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.440 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:14:03.440 ************************************ 00:14:03.440 START TEST raid_superblock_test 00:14:03.440 ************************************ 00:14:03.440 20:56:31 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@357 -- # raid_pid=112133 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@358 -- # waitforlisten 112133 
/var/tmp/spdk-raid.sock 00:14:03.440 20:56:31 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:03.440 20:56:31 -- common/autotest_common.sh@819 -- # '[' -z 112133 ']' 00:14:03.440 20:56:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:03.440 20:56:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:03.440 20:56:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:03.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:03.440 20:56:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:03.440 20:56:31 -- common/autotest_common.sh@10 -- # set +x 00:14:03.440 [2024-06-09 20:56:31.337896] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:03.440 [2024-06-09 20:56:31.338089] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112133 ] 00:14:03.440 [2024-06-09 20:56:31.503921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.699 [2024-06-09 20:56:31.686864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.958 [2024-06-09 20:56:31.880100] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.217 20:56:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:04.217 20:56:32 -- common/autotest_common.sh@852 -- # return 0 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.217 20:56:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:04.475 malloc1 00:14:04.475 20:56:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.734 [2024-06-09 20:56:32.727869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.734 [2024-06-09 20:56:32.727981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.734 [2024-06-09 20:56:32.728023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:04.734 [2024-06-09 20:56:32.728072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.734 [2024-06-09 20:56:32.730321] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.734 [2024-06-09 20:56:32.730370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.734 pt1 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.734 20:56:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:04.992 malloc2 00:14:04.992 20:56:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.992 [2024-06-09 20:56:33.153297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.992 [2024-06-09 20:56:33.153365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.992 [2024-06-09 20:56:33.153414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:04.992 [2024-06-09 20:56:33.153469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.992 [2024-06-09 20:56:33.155710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.992 [2024-06-09 20:56:33.155759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.992 pt2 00:14:04.992 20:56:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:04.992 20:56:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.992 20:56:33 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:05.251 [2024-06-09 20:56:33.341443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.251 [2024-06-09 20:56:33.343611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.251 [2024-06-09 20:56:33.343838] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:14:05.251 [2024-06-09 20:56:33.343853] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:05.251 [2024-06-09 20:56:33.343976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:05.251 [2024-06-09 20:56:33.344322] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:14:05.251 [2024-06-09 20:56:33.344345] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:14:05.251 [2024-06-09 20:56:33.344478] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.251 20:56:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.509 20:56:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.509 "name": "raid_bdev1", 00:14:05.509 "uuid": "cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812", 00:14:05.509 "strip_size_kb": 64, 00:14:05.509 "state": "online", 00:14:05.509 "raid_level": "raid0", 00:14:05.509 "superblock": true, 00:14:05.509 "num_base_bdevs": 2, 00:14:05.509 "num_base_bdevs_discovered": 2, 00:14:05.509 "num_base_bdevs_operational": 2, 00:14:05.509 "base_bdevs_list": [ 00:14:05.509 { 00:14:05.509 "name": "pt1", 00:14:05.509 "uuid": "7f6e2d5f-d335-59b9-81c4-1cf2f5bbe0d4", 00:14:05.509 "is_configured": true, 00:14:05.509 "data_offset": 2048, 00:14:05.509 "data_size": 63488 00:14:05.509 }, 00:14:05.509 { 00:14:05.509 "name": "pt2", 00:14:05.509 "uuid": "79447426-ae14-5beb-8f69-41a5aaebab6d", 00:14:05.509 "is_configured": true, 00:14:05.509 "data_offset": 2048, 00:14:05.509 "data_size": 63488 00:14:05.509 } 00:14:05.509 ] 00:14:05.509 }' 00:14:05.509 20:56:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.509 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:14:06.075 20:56:34 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:06.075 20:56:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:06.334 [2024-06-09 20:56:34.341691] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.334 20:56:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812 00:14:06.334 20:56:34 -- bdev/bdev_raid.sh@380 -- # '[' -z cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812 ']' 00:14:06.334 20:56:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:06.592 [2024-06-09 20:56:34.589571] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.592 [2024-06-09 20:56:34.589594] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.592 [2024-06-09 20:56:34.589650] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.592 [2024-06-09 20:56:34.589691] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.592 [2024-06-09 20:56:34.589701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:14:06.592 20:56:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:06.592 20:56:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.851 20:56:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:06.851 20:56:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:06.851 20:56:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.851 20:56:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
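verify_raid_bdev_state, whose expansion fills most of the block above, does all of its checking through a single RPC round-trip: it dumps every raid bdev and filters out the entry under test with jq, then compares fields such as .state, .num_base_bdevs_discovered and .base_bdevs_list against the expected values. The query pair, exactly as issued in this run:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")'

The teardown around this point runs in reverse order of construction: bdev_raid_delete raid_bdev1 drops the array first, then bdev_passthru_delete removes each pt leg in turn, so the base bdevs are released before their wrappers disappear.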
00:14:07.109 20:56:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.109 20:56:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:07.109 20:56:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:07.109 20:56:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:07.368 20:56:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:07.368 20:56:35 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.368 20:56:35 -- common/autotest_common.sh@640 -- # local es=0 00:14:07.368 20:56:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.368 20:56:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.368 20:56:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.368 20:56:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.368 20:56:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.368 20:56:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.368 20:56:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:07.368 20:56:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.368 20:56:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:07.368 20:56:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.627 [2024-06-09 20:56:35.637735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:07.627 [2024-06-09 20:56:35.639654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:07.627 [2024-06-09 20:56:35.639715] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:07.627 [2024-06-09 20:56:35.639783] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:07.627 [2024-06-09 20:56:35.639821] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.627 [2024-06-09 20:56:35.639831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:14:07.627 request: 00:14:07.627 { 00:14:07.627 "name": "raid_bdev1", 00:14:07.627 "raid_level": "raid0", 00:14:07.627 "base_bdevs": [ 00:14:07.627 "malloc1", 00:14:07.627 "malloc2" 00:14:07.627 ], 00:14:07.627 "superblock": false, 00:14:07.627 "strip_size_kb": 64, 00:14:07.627 "method": "bdev_raid_create", 00:14:07.627 "req_id": 1 00:14:07.627 } 00:14:07.627 Got JSON-RPC error response 00:14:07.627 response: 00:14:07.627 { 00:14:07.627 "code": -17, 00:14:07.627 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:07.627 } 00:14:07.627 20:56:35 -- common/autotest_common.sh@643 -- # es=1 00:14:07.627 20:56:35 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:14:07.627 20:56:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:07.627 20:56:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:07.627 20:56:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:07.627 20:56:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.886 20:56:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:07.886 20:56:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:07.886 20:56:35 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:07.886 [2024-06-09 20:56:36.037764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:07.886 [2024-06-09 20:56:36.037856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.886 [2024-06-09 20:56:36.037891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:07.886 [2024-06-09 20:56:36.037918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.886 [2024-06-09 20:56:36.040272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.886 [2024-06-09 20:56:36.040328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:07.886 [2024-06-09 20:56:36.040418] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:07.886 [2024-06-09 20:56:36.040482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:07.886 pt1 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.886 20:56:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.145 20:56:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.145 "name": "raid_bdev1", 00:14:08.145 "uuid": "cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812", 00:14:08.145 "strip_size_kb": 64, 00:14:08.145 "state": "configuring", 00:14:08.145 "raid_level": "raid0", 00:14:08.145 "superblock": true, 00:14:08.145 "num_base_bdevs": 2, 00:14:08.145 "num_base_bdevs_discovered": 1, 00:14:08.145 "num_base_bdevs_operational": 2, 00:14:08.145 "base_bdevs_list": [ 00:14:08.145 { 00:14:08.145 "name": "pt1", 00:14:08.145 "uuid": "7f6e2d5f-d335-59b9-81c4-1cf2f5bbe0d4", 00:14:08.145 "is_configured": true, 00:14:08.145 "data_offset": 2048, 00:14:08.145 "data_size": 63488 00:14:08.145 }, 00:14:08.145 { 00:14:08.145 "name": null, 00:14:08.145 "uuid": "79447426-ae14-5beb-8f69-41a5aaebab6d", 00:14:08.145 
"is_configured": false, 00:14:08.145 "data_offset": 2048, 00:14:08.145 "data_size": 63488 00:14:08.145 } 00:14:08.145 ] 00:14:08.145 }' 00:14:08.145 20:56:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.145 20:56:36 -- common/autotest_common.sh@10 -- # set +x 00:14:08.713 20:56:36 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:08.713 20:56:36 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:08.713 20:56:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:08.713 20:56:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.982 [2024-06-09 20:56:37.069980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.982 [2024-06-09 20:56:37.070058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.982 [2024-06-09 20:56:37.070092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:08.982 [2024-06-09 20:56:37.070119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.982 [2024-06-09 20:56:37.070475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.982 [2024-06-09 20:56:37.070546] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.982 [2024-06-09 20:56:37.070624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:08.982 [2024-06-09 20:56:37.070645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.982 [2024-06-09 20:56:37.070738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:14:08.982 [2024-06-09 20:56:37.070758] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:08.982 [2024-06-09 20:56:37.070857] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:08.982 [2024-06-09 20:56:37.071152] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:14:08.982 [2024-06-09 20:56:37.071173] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:14:08.982 [2024-06-09 20:56:37.071283] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.982 pt2 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.982 20:56:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.982 20:56:37 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.276 20:56:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.277 "name": "raid_bdev1", 00:14:09.277 "uuid": "cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812", 00:14:09.277 "strip_size_kb": 64, 00:14:09.277 "state": "online", 00:14:09.277 "raid_level": "raid0", 00:14:09.277 "superblock": true, 00:14:09.277 "num_base_bdevs": 2, 00:14:09.277 "num_base_bdevs_discovered": 2, 00:14:09.277 "num_base_bdevs_operational": 2, 00:14:09.277 "base_bdevs_list": [ 00:14:09.277 { 00:14:09.277 "name": "pt1", 00:14:09.277 "uuid": "7f6e2d5f-d335-59b9-81c4-1cf2f5bbe0d4", 00:14:09.277 "is_configured": true, 00:14:09.277 "data_offset": 2048, 00:14:09.277 "data_size": 63488 00:14:09.277 }, 00:14:09.277 { 00:14:09.277 "name": "pt2", 00:14:09.277 "uuid": "79447426-ae14-5beb-8f69-41a5aaebab6d", 00:14:09.277 "is_configured": true, 00:14:09.277 "data_offset": 2048, 00:14:09.277 "data_size": 63488 00:14:09.277 } 00:14:09.277 ] 00:14:09.277 }' 00:14:09.277 20:56:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.277 20:56:37 -- common/autotest_common.sh@10 -- # set +x 00:14:09.851 20:56:37 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:09.851 20:56:37 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:10.110 [2024-06-09 20:56:38.126313] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.110 20:56:38 -- bdev/bdev_raid.sh@430 -- # '[' cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812 '!=' cf94a39b-b9fa-4c7d-81a6-5cec8f4c0812 ']' 00:14:10.110 20:56:38 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:10.110 20:56:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:10.110 20:56:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:10.110 20:56:38 -- bdev/bdev_raid.sh@511 -- # killprocess 112133 00:14:10.110 20:56:38 -- common/autotest_common.sh@926 -- # '[' -z 112133 ']' 00:14:10.110 20:56:38 -- common/autotest_common.sh@930 -- # kill -0 112133 00:14:10.110 20:56:38 -- common/autotest_common.sh@931 -- # uname 00:14:10.110 20:56:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.110 20:56:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112133 00:14:10.110 20:56:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:10.110 20:56:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:10.110 20:56:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112133' 00:14:10.110 killing process with pid 112133 00:14:10.110 20:56:38 -- common/autotest_common.sh@945 -- # kill 112133 00:14:10.110 [2024-06-09 20:56:38.171864] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.110 [2024-06-09 20:56:38.171919] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.110 [2024-06-09 20:56:38.171956] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.110 [2024-06-09 20:56:38.171966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:14:10.110 20:56:38 -- common/autotest_common.sh@950 -- # wait 112133 00:14:10.369 [2024-06-09 20:56:38.305670] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.305 ************************************ 00:14:11.305 END TEST raid_superblock_test 00:14:11.305 ************************************ 00:14:11.305 20:56:39 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:14:11.305 00:14:11.305 real 0m8.037s 00:14:11.305 user 0m13.563s 00:14:11.305 sys 0m1.050s 00:14:11.305 20:56:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.305 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:11.305 20:56:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:11.305 20:56:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.305 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:11.305 ************************************ 00:14:11.305 START TEST raid_state_function_test 00:14:11.305 ************************************ 00:14:11.305 20:56:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=112382 00:14:11.305 Process raid pid: 112382 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112382' 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112382 /var/tmp/spdk-raid.sock 00:14:11.305 20:56:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:11.305 20:56:39 -- common/autotest_common.sh@819 -- # '[' -z 112382 ']' 00:14:11.305 20:56:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:11.305 20:56:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
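Each state-function test brings up its own RPC target before touching any bdevs: bdev_svc is launched against a private socket with bdev_raid debug logging enabled, and waitforlisten blocks until that socket answers. A sketch of the bring-up using the exact flags from this run; the backgrounding and pid capture are assumed, since only the wrapper's xtrace is visible in the log:

# start the bdev service: -r selects the RPC socket, -i 0 the shm id,
# -L bdev_raid enables raid debug logging
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# waitforlisten (from autotest_common.sh) polls until the socket accepts RPCs,
# giving up after max_retries=100 per the locals traced above
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock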
00:14:11.305 20:56:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:11.305 20:56:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.305 20:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:11.305 [2024-06-09 20:56:39.434734] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:11.305 [2024-06-09 20:56:39.434961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.564 [2024-06-09 20:56:39.603485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.823 [2024-06-09 20:56:39.790367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.823 [2024-06-09 20:56:39.978002] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.389 20:56:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.389 20:56:40 -- common/autotest_common.sh@852 -- # return 0 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:12.389 [2024-06-09 20:56:40.503282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.389 [2024-06-09 20:56:40.503374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.389 [2024-06-09 20:56:40.503395] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.389 [2024-06-09 20:56:40.503415] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.389 20:56:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.647 20:56:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.647 "name": "Existed_Raid", 00:14:12.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.647 "strip_size_kb": 64, 00:14:12.647 "state": "configuring", 00:14:12.647 "raid_level": "concat", 00:14:12.647 "superblock": false, 00:14:12.647 "num_base_bdevs": 2, 00:14:12.647 "num_base_bdevs_discovered": 0, 00:14:12.647 "num_base_bdevs_operational": 2, 00:14:12.647 "base_bdevs_list": [ 00:14:12.647 { 00:14:12.647 "name": "BaseBdev1", 00:14:12.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.647 "is_configured": false, 
00:14:12.647 "data_offset": 0, 00:14:12.647 "data_size": 0 00:14:12.647 }, 00:14:12.647 { 00:14:12.647 "name": "BaseBdev2", 00:14:12.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.647 "is_configured": false, 00:14:12.647 "data_offset": 0, 00:14:12.647 "data_size": 0 00:14:12.647 } 00:14:12.647 ] 00:14:12.647 }' 00:14:12.647 20:56:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.647 20:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:13.212 20:56:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:13.469 [2024-06-09 20:56:41.535337] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:13.469 [2024-06-09 20:56:41.535375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:13.469 20:56:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:13.727 [2024-06-09 20:56:41.783399] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.727 [2024-06-09 20:56:41.783468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.727 [2024-06-09 20:56:41.783480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:13.727 [2024-06-09 20:56:41.783504] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:13.727 20:56:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.985 [2024-06-09 20:56:42.009075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.985 BaseBdev1 00:14:13.985 20:56:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:13.985 20:56:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:13.985 20:56:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:13.985 20:56:42 -- common/autotest_common.sh@889 -- # local i 00:14:13.985 20:56:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:13.985 20:56:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:13.985 20:56:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.242 20:56:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:14.500 [ 00:14:14.500 { 00:14:14.500 "name": "BaseBdev1", 00:14:14.500 "aliases": [ 00:14:14.500 "b20e988e-7975-4345-beed-3899f2b31936" 00:14:14.500 ], 00:14:14.500 "product_name": "Malloc disk", 00:14:14.500 "block_size": 512, 00:14:14.500 "num_blocks": 65536, 00:14:14.500 "uuid": "b20e988e-7975-4345-beed-3899f2b31936", 00:14:14.500 "assigned_rate_limits": { 00:14:14.500 "rw_ios_per_sec": 0, 00:14:14.500 "rw_mbytes_per_sec": 0, 00:14:14.500 "r_mbytes_per_sec": 0, 00:14:14.500 "w_mbytes_per_sec": 0 00:14:14.500 }, 00:14:14.500 "claimed": true, 00:14:14.500 "claim_type": "exclusive_write", 00:14:14.500 "zoned": false, 00:14:14.500 "supported_io_types": { 00:14:14.500 "read": true, 00:14:14.500 "write": true, 00:14:14.500 "unmap": true, 00:14:14.500 "write_zeroes": true, 00:14:14.500 "flush": true, 00:14:14.500 "reset": true, 00:14:14.500 
"compare": false, 00:14:14.500 "compare_and_write": false, 00:14:14.500 "abort": true, 00:14:14.500 "nvme_admin": false, 00:14:14.500 "nvme_io": false 00:14:14.500 }, 00:14:14.500 "memory_domains": [ 00:14:14.500 { 00:14:14.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.500 "dma_device_type": 2 00:14:14.500 } 00:14:14.500 ], 00:14:14.500 "driver_specific": {} 00:14:14.500 } 00:14:14.500 ] 00:14:14.500 20:56:42 -- common/autotest_common.sh@895 -- # return 0 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:14.500 "name": "Existed_Raid", 00:14:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.500 "strip_size_kb": 64, 00:14:14.500 "state": "configuring", 00:14:14.500 "raid_level": "concat", 00:14:14.500 "superblock": false, 00:14:14.500 "num_base_bdevs": 2, 00:14:14.500 "num_base_bdevs_discovered": 1, 00:14:14.500 "num_base_bdevs_operational": 2, 00:14:14.500 "base_bdevs_list": [ 00:14:14.500 { 00:14:14.500 "name": "BaseBdev1", 00:14:14.500 "uuid": "b20e988e-7975-4345-beed-3899f2b31936", 00:14:14.500 "is_configured": true, 00:14:14.500 "data_offset": 0, 00:14:14.500 "data_size": 65536 00:14:14.500 }, 00:14:14.500 { 00:14:14.500 "name": "BaseBdev2", 00:14:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.500 "is_configured": false, 00:14:14.500 "data_offset": 0, 00:14:14.500 "data_size": 0 00:14:14.500 } 00:14:14.500 ] 00:14:14.500 }' 00:14:14.500 20:56:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:14.500 20:56:42 -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 20:56:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.434 [2024-06-09 20:56:43.497339] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.434 [2024-06-09 20:56:43.497374] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:15.434 20:56:43 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:15.434 20:56:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.692 [2024-06-09 20:56:43.685412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.692 [2024-06-09 20:56:43.687429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:15.692 [2024-06-09 20:56:43.687484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.692 20:56:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.951 20:56:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.951 "name": "Existed_Raid", 00:14:15.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.951 "strip_size_kb": 64, 00:14:15.951 "state": "configuring", 00:14:15.951 "raid_level": "concat", 00:14:15.951 "superblock": false, 00:14:15.951 "num_base_bdevs": 2, 00:14:15.951 "num_base_bdevs_discovered": 1, 00:14:15.951 "num_base_bdevs_operational": 2, 00:14:15.951 "base_bdevs_list": [ 00:14:15.951 { 00:14:15.951 "name": "BaseBdev1", 00:14:15.951 "uuid": "b20e988e-7975-4345-beed-3899f2b31936", 00:14:15.951 "is_configured": true, 00:14:15.951 "data_offset": 0, 00:14:15.951 "data_size": 65536 00:14:15.951 }, 00:14:15.951 { 00:14:15.951 "name": "BaseBdev2", 00:14:15.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.951 "is_configured": false, 00:14:15.951 "data_offset": 0, 00:14:15.951 "data_size": 0 00:14:15.951 } 00:14:15.951 ] 00:14:15.951 }' 00:14:15.951 20:56:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.951 20:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:16.517 20:56:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:16.776 [2024-06-09 20:56:44.848778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.776 [2024-06-09 20:56:44.848819] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:16.776 [2024-06-09 20:56:44.848828] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:16.776 [2024-06-09 20:56:44.848943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:16.776 [2024-06-09 20:56:44.849305] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:16.776 [2024-06-09 20:56:44.849326] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:16.776 [2024-06-09 20:56:44.849628] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.776 BaseBdev2 00:14:16.776 20:56:44 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:16.776 20:56:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:16.776 20:56:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:16.776 20:56:44 -- common/autotest_common.sh@889 -- # local i 00:14:16.776 20:56:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:16.776 20:56:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:16.776 20:56:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.034 20:56:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.293 [ 00:14:17.293 { 00:14:17.293 "name": "BaseBdev2", 00:14:17.293 "aliases": [ 00:14:17.293 "50a93f8b-3303-4524-bb0b-078652e2b3f4" 00:14:17.293 ], 00:14:17.293 "product_name": "Malloc disk", 00:14:17.293 "block_size": 512, 00:14:17.293 "num_blocks": 65536, 00:14:17.293 "uuid": "50a93f8b-3303-4524-bb0b-078652e2b3f4", 00:14:17.293 "assigned_rate_limits": { 00:14:17.293 "rw_ios_per_sec": 0, 00:14:17.293 "rw_mbytes_per_sec": 0, 00:14:17.293 "r_mbytes_per_sec": 0, 00:14:17.293 "w_mbytes_per_sec": 0 00:14:17.293 }, 00:14:17.293 "claimed": true, 00:14:17.293 "claim_type": "exclusive_write", 00:14:17.293 "zoned": false, 00:14:17.293 "supported_io_types": { 00:14:17.293 "read": true, 00:14:17.293 "write": true, 00:14:17.293 "unmap": true, 00:14:17.293 "write_zeroes": true, 00:14:17.293 "flush": true, 00:14:17.293 "reset": true, 00:14:17.293 "compare": false, 00:14:17.293 "compare_and_write": false, 00:14:17.293 "abort": true, 00:14:17.293 "nvme_admin": false, 00:14:17.293 "nvme_io": false 00:14:17.293 }, 00:14:17.293 "memory_domains": [ 00:14:17.293 { 00:14:17.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.293 "dma_device_type": 2 00:14:17.293 } 00:14:17.293 ], 00:14:17.293 "driver_specific": {} 00:14:17.293 } 00:14:17.293 ] 00:14:17.293 20:56:45 -- common/autotest_common.sh@895 -- # return 0 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.293 20:56:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.293 "name": "Existed_Raid", 00:14:17.293 "uuid": "7913dca4-ad12-4118-b125-1034eccce4f1", 00:14:17.293 "strip_size_kb": 64, 00:14:17.293 "state": "online", 00:14:17.293 "raid_level": "concat", 00:14:17.293 "superblock": false, 
00:14:17.293 "num_base_bdevs": 2, 00:14:17.293 "num_base_bdevs_discovered": 2, 00:14:17.293 "num_base_bdevs_operational": 2, 00:14:17.293 "base_bdevs_list": [ 00:14:17.293 { 00:14:17.293 "name": "BaseBdev1", 00:14:17.293 "uuid": "b20e988e-7975-4345-beed-3899f2b31936", 00:14:17.293 "is_configured": true, 00:14:17.293 "data_offset": 0, 00:14:17.293 "data_size": 65536 00:14:17.293 }, 00:14:17.294 { 00:14:17.294 "name": "BaseBdev2", 00:14:17.294 "uuid": "50a93f8b-3303-4524-bb0b-078652e2b3f4", 00:14:17.294 "is_configured": true, 00:14:17.294 "data_offset": 0, 00:14:17.294 "data_size": 65536 00:14:17.294 } 00:14:17.294 ] 00:14:17.294 }' 00:14:17.294 20:56:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.294 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:14:17.861 20:56:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.120 [2024-06-09 20:56:46.193101] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.120 [2024-06-09 20:56:46.193126] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.120 [2024-06-09 20:56:46.193181] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.120 20:56:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.379 20:56:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.379 "name": "Existed_Raid", 00:14:18.379 "uuid": "7913dca4-ad12-4118-b125-1034eccce4f1", 00:14:18.379 "strip_size_kb": 64, 00:14:18.379 "state": "offline", 00:14:18.379 "raid_level": "concat", 00:14:18.379 "superblock": false, 00:14:18.379 "num_base_bdevs": 2, 00:14:18.379 "num_base_bdevs_discovered": 1, 00:14:18.379 "num_base_bdevs_operational": 1, 00:14:18.379 "base_bdevs_list": [ 00:14:18.379 { 00:14:18.379 "name": null, 00:14:18.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.379 "is_configured": false, 00:14:18.379 "data_offset": 0, 00:14:18.379 "data_size": 65536 00:14:18.379 }, 00:14:18.379 { 00:14:18.379 "name": "BaseBdev2", 00:14:18.379 "uuid": "50a93f8b-3303-4524-bb0b-078652e2b3f4", 00:14:18.379 "is_configured": true, 00:14:18.379 "data_offset": 0, 00:14:18.379 
"data_size": 65536 00:14:18.379 } 00:14:18.379 ] 00:14:18.379 }' 00:14:18.379 20:56:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.379 20:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:18.946 20:56:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:18.946 20:56:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:18.946 20:56:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.946 20:56:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:19.205 20:56:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:19.205 20:56:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.205 20:56:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:19.463 [2024-06-09 20:56:47.483696] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.463 [2024-06-09 20:56:47.483759] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:19.463 20:56:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:19.463 20:56:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:19.463 20:56:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.463 20:56:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.721 20:56:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:19.721 20:56:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:19.721 20:56:47 -- bdev/bdev_raid.sh@287 -- # killprocess 112382 00:14:19.721 20:56:47 -- common/autotest_common.sh@926 -- # '[' -z 112382 ']' 00:14:19.721 20:56:47 -- common/autotest_common.sh@930 -- # kill -0 112382 00:14:19.721 20:56:47 -- common/autotest_common.sh@931 -- # uname 00:14:19.721 20:56:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:19.721 20:56:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112382 00:14:19.721 20:56:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:19.721 20:56:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:19.721 20:56:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112382' 00:14:19.721 killing process with pid 112382 00:14:19.721 20:56:47 -- common/autotest_common.sh@945 -- # kill 112382 00:14:19.721 20:56:47 -- common/autotest_common.sh@950 -- # wait 112382 00:14:19.721 [2024-06-09 20:56:47.845365] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.721 [2024-06-09 20:56:47.845481] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.096 ************************************ 00:14:21.096 END TEST raid_state_function_test 00:14:21.096 ************************************ 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:21.096 00:14:21.096 real 0m9.500s 00:14:21.096 user 0m16.441s 00:14:21.096 sys 0m1.125s 00:14:21.096 20:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.096 20:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:21.096 20:56:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:21.096 20:56:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.096 20:56:48 -- common/autotest_common.sh@10 -- # 
set +x 00:14:21.096 ************************************ 00:14:21.096 START TEST raid_state_function_test_sb 00:14:21.096 ************************************ 00:14:21.096 20:56:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=112695 00:14:21.096 Process raid pid: 112695 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112695' 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112695 /var/tmp/spdk-raid.sock 00:14:21.096 20:56:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:21.096 20:56:48 -- common/autotest_common.sh@819 -- # '[' -z 112695 ']' 00:14:21.096 20:56:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.096 20:56:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:21.096 20:56:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.096 20:56:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.096 20:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:21.096 [2024-06-09 20:56:48.991470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
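The _sb variant starting here re-runs the same concat state machine with superblocks enabled; per the superblock_create_arg handling traced above, the change on the wire is the -s flag passed to bdev_raid_create, which persists the raid metadata on the base bdevs instead of keeping it only in runtime configuration. Both forms appear verbatim in this log:

# raid_state_function_test (no superblock)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# raid_state_function_test_sb (superblock written; note the extra -s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid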
00:14:21.096 [2024-06-09 20:56:48.991675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.096 [2024-06-09 20:56:49.155523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.355 [2024-06-09 20:56:49.336890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.355 [2024-06-09 20:56:49.528905] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.921 20:56:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:21.921 20:56:49 -- common/autotest_common.sh@852 -- # return 0 00:14:21.921 20:56:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:21.921 [2024-06-09 20:56:50.062137] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.921 [2024-06-09 20:56:50.062228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.921 [2024-06-09 20:56:50.062241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.921 [2024-06-09 20:56:50.062261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.921 20:56:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.180 20:56:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.180 "name": "Existed_Raid", 00:14:22.180 "uuid": "fdfe48e1-b6e0-4e97-9f42-2685e92c8308", 00:14:22.180 "strip_size_kb": 64, 00:14:22.180 "state": "configuring", 00:14:22.180 "raid_level": "concat", 00:14:22.180 "superblock": true, 00:14:22.180 "num_base_bdevs": 2, 00:14:22.180 "num_base_bdevs_discovered": 0, 00:14:22.180 "num_base_bdevs_operational": 2, 00:14:22.180 "base_bdevs_list": [ 00:14:22.180 { 00:14:22.180 "name": "BaseBdev1", 00:14:22.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.180 "is_configured": false, 00:14:22.180 "data_offset": 0, 00:14:22.180 "data_size": 0 00:14:22.180 }, 00:14:22.180 { 00:14:22.180 "name": "BaseBdev2", 00:14:22.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.180 "is_configured": false, 00:14:22.180 "data_offset": 0, 00:14:22.180 "data_size": 0 00:14:22.180 } 00:14:22.180 ] 00:14:22.180 }' 00:14:22.180 20:56:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.180 20:56:50 -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.114 20:56:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.114 [2024-06-09 20:56:51.187237] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.114 [2024-06-09 20:56:51.187282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:23.114 20:56:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.373 [2024-06-09 20:56:51.427309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.373 [2024-06-09 20:56:51.427381] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.373 [2024-06-09 20:56:51.427402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.373 [2024-06-09 20:56:51.427426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.373 20:56:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.632 [2024-06-09 20:56:51.649034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.632 BaseBdev1 00:14:23.632 20:56:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:23.632 20:56:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:23.632 20:56:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:23.632 20:56:51 -- common/autotest_common.sh@889 -- # local i 00:14:23.632 20:56:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:23.632 20:56:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:23.632 20:56:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:23.891 20:56:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.891 [ 00:14:23.891 { 00:14:23.891 "name": "BaseBdev1", 00:14:23.891 "aliases": [ 00:14:23.891 "1dc049b5-3735-4bd9-a286-22f3a0bfeefb" 00:14:23.891 ], 00:14:23.891 "product_name": "Malloc disk", 00:14:23.891 "block_size": 512, 00:14:23.891 "num_blocks": 65536, 00:14:23.891 "uuid": "1dc049b5-3735-4bd9-a286-22f3a0bfeefb", 00:14:23.891 "assigned_rate_limits": { 00:14:23.891 "rw_ios_per_sec": 0, 00:14:23.891 "rw_mbytes_per_sec": 0, 00:14:23.891 "r_mbytes_per_sec": 0, 00:14:23.891 "w_mbytes_per_sec": 0 00:14:23.891 }, 00:14:23.891 "claimed": true, 00:14:23.891 "claim_type": "exclusive_write", 00:14:23.891 "zoned": false, 00:14:23.891 "supported_io_types": { 00:14:23.891 "read": true, 00:14:23.891 "write": true, 00:14:23.891 "unmap": true, 00:14:23.891 "write_zeroes": true, 00:14:23.891 "flush": true, 00:14:23.891 "reset": true, 00:14:23.891 "compare": false, 00:14:23.891 "compare_and_write": false, 00:14:23.891 "abort": true, 00:14:23.891 "nvme_admin": false, 00:14:23.891 "nvme_io": false 00:14:23.891 }, 00:14:23.891 "memory_domains": [ 00:14:23.891 { 00:14:23.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.891 "dma_device_type": 2 00:14:23.891 } 00:14:23.891 ], 00:14:23.891 "driver_specific": {} 00:14:23.891 } 00:14:23.891 ] 00:14:23.891 
20:56:52 -- common/autotest_common.sh@895 -- # return 0 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.891 20:56:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.183 20:56:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.183 "name": "Existed_Raid", 00:14:24.183 "uuid": "95468464-070f-4398-80ab-c81e67d06955", 00:14:24.183 "strip_size_kb": 64, 00:14:24.183 "state": "configuring", 00:14:24.183 "raid_level": "concat", 00:14:24.183 "superblock": true, 00:14:24.183 "num_base_bdevs": 2, 00:14:24.183 "num_base_bdevs_discovered": 1, 00:14:24.183 "num_base_bdevs_operational": 2, 00:14:24.183 "base_bdevs_list": [ 00:14:24.183 { 00:14:24.183 "name": "BaseBdev1", 00:14:24.183 "uuid": "1dc049b5-3735-4bd9-a286-22f3a0bfeefb", 00:14:24.183 "is_configured": true, 00:14:24.183 "data_offset": 2048, 00:14:24.183 "data_size": 63488 00:14:24.183 }, 00:14:24.183 { 00:14:24.183 "name": "BaseBdev2", 00:14:24.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.183 "is_configured": false, 00:14:24.183 "data_offset": 0, 00:14:24.183 "data_size": 0 00:14:24.183 } 00:14:24.183 ] 00:14:24.183 }' 00:14:24.183 20:56:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.183 20:56:52 -- common/autotest_common.sh@10 -- # set +x 00:14:24.765 20:56:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.024 [2024-06-09 20:56:53.189306] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.024 [2024-06-09 20:56:53.189344] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:25.283 20:56:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:25.283 20:56:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.541 20:56:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.800 BaseBdev1 00:14:25.800 20:56:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:25.800 20:56:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:25.800 20:56:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:25.800 20:56:53 -- common/autotest_common.sh@889 -- # local i 00:14:25.800 20:56:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:25.800 20:56:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:25.800 20:56:53 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.800 20:56:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.059 [ 00:14:26.059 { 00:14:26.059 "name": "BaseBdev1", 00:14:26.059 "aliases": [ 00:14:26.059 "af0f64bb-4ad1-49ee-8610-358943d34ecc" 00:14:26.059 ], 00:14:26.059 "product_name": "Malloc disk", 00:14:26.059 "block_size": 512, 00:14:26.059 "num_blocks": 65536, 00:14:26.059 "uuid": "af0f64bb-4ad1-49ee-8610-358943d34ecc", 00:14:26.059 "assigned_rate_limits": { 00:14:26.059 "rw_ios_per_sec": 0, 00:14:26.059 "rw_mbytes_per_sec": 0, 00:14:26.059 "r_mbytes_per_sec": 0, 00:14:26.059 "w_mbytes_per_sec": 0 00:14:26.060 }, 00:14:26.060 "claimed": false, 00:14:26.060 "zoned": false, 00:14:26.060 "supported_io_types": { 00:14:26.060 "read": true, 00:14:26.060 "write": true, 00:14:26.060 "unmap": true, 00:14:26.060 "write_zeroes": true, 00:14:26.060 "flush": true, 00:14:26.060 "reset": true, 00:14:26.060 "compare": false, 00:14:26.060 "compare_and_write": false, 00:14:26.060 "abort": true, 00:14:26.060 "nvme_admin": false, 00:14:26.060 "nvme_io": false 00:14:26.060 }, 00:14:26.060 "memory_domains": [ 00:14:26.060 { 00:14:26.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.060 "dma_device_type": 2 00:14:26.060 } 00:14:26.060 ], 00:14:26.060 "driver_specific": {} 00:14:26.060 } 00:14:26.060 ] 00:14:26.060 20:56:54 -- common/autotest_common.sh@895 -- # return 0 00:14:26.060 20:56:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:26.319 [2024-06-09 20:56:54.346119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.319 [2024-06-09 20:56:54.348030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.319 [2024-06-09 20:56:54.348092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.319 20:56:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.578 20:56:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.578 "name": "Existed_Raid", 00:14:26.578 "uuid": "da99883a-96fb-4540-beea-4340dc3cf6fb", 00:14:26.578 "strip_size_kb": 64, 00:14:26.578 "state": 
"configuring", 00:14:26.578 "raid_level": "concat", 00:14:26.578 "superblock": true, 00:14:26.578 "num_base_bdevs": 2, 00:14:26.578 "num_base_bdevs_discovered": 1, 00:14:26.578 "num_base_bdevs_operational": 2, 00:14:26.578 "base_bdevs_list": [ 00:14:26.578 { 00:14:26.578 "name": "BaseBdev1", 00:14:26.578 "uuid": "af0f64bb-4ad1-49ee-8610-358943d34ecc", 00:14:26.578 "is_configured": true, 00:14:26.578 "data_offset": 2048, 00:14:26.578 "data_size": 63488 00:14:26.578 }, 00:14:26.578 { 00:14:26.578 "name": "BaseBdev2", 00:14:26.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.578 "is_configured": false, 00:14:26.578 "data_offset": 0, 00:14:26.578 "data_size": 0 00:14:26.578 } 00:14:26.578 ] 00:14:26.578 }' 00:14:26.578 20:56:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.578 20:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:27.144 20:56:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.403 [2024-06-09 20:56:55.469045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.403 [2024-06-09 20:56:55.469258] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:27.403 [2024-06-09 20:56:55.469273] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.403 [2024-06-09 20:56:55.469380] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:27.403 [2024-06-09 20:56:55.469774] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:27.403 [2024-06-09 20:56:55.469796] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:14:27.403 [2024-06-09 20:56:55.469928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.403 BaseBdev2 00:14:27.403 20:56:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:27.403 20:56:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:27.403 20:56:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:27.403 20:56:55 -- common/autotest_common.sh@889 -- # local i 00:14:27.403 20:56:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:27.403 20:56:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:27.403 20:56:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.662 20:56:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.920 [ 00:14:27.920 { 00:14:27.920 "name": "BaseBdev2", 00:14:27.920 "aliases": [ 00:14:27.920 "6a2ba365-f893-4f8c-8cc0-3e2d69370332" 00:14:27.920 ], 00:14:27.920 "product_name": "Malloc disk", 00:14:27.920 "block_size": 512, 00:14:27.920 "num_blocks": 65536, 00:14:27.920 "uuid": "6a2ba365-f893-4f8c-8cc0-3e2d69370332", 00:14:27.920 "assigned_rate_limits": { 00:14:27.920 "rw_ios_per_sec": 0, 00:14:27.920 "rw_mbytes_per_sec": 0, 00:14:27.920 "r_mbytes_per_sec": 0, 00:14:27.920 "w_mbytes_per_sec": 0 00:14:27.920 }, 00:14:27.920 "claimed": true, 00:14:27.920 "claim_type": "exclusive_write", 00:14:27.920 "zoned": false, 00:14:27.920 "supported_io_types": { 00:14:27.920 "read": true, 00:14:27.920 "write": true, 00:14:27.921 "unmap": true, 00:14:27.921 "write_zeroes": true, 00:14:27.921 "flush": true, 00:14:27.921 
"reset": true, 00:14:27.921 "compare": false, 00:14:27.921 "compare_and_write": false, 00:14:27.921 "abort": true, 00:14:27.921 "nvme_admin": false, 00:14:27.921 "nvme_io": false 00:14:27.921 }, 00:14:27.921 "memory_domains": [ 00:14:27.921 { 00:14:27.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.921 "dma_device_type": 2 00:14:27.921 } 00:14:27.921 ], 00:14:27.921 "driver_specific": {} 00:14:27.921 } 00:14:27.921 ] 00:14:27.921 20:56:55 -- common/autotest_common.sh@895 -- # return 0 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.921 20:56:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.179 20:56:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.179 "name": "Existed_Raid", 00:14:28.179 "uuid": "da99883a-96fb-4540-beea-4340dc3cf6fb", 00:14:28.179 "strip_size_kb": 64, 00:14:28.179 "state": "online", 00:14:28.179 "raid_level": "concat", 00:14:28.179 "superblock": true, 00:14:28.179 "num_base_bdevs": 2, 00:14:28.179 "num_base_bdevs_discovered": 2, 00:14:28.179 "num_base_bdevs_operational": 2, 00:14:28.179 "base_bdevs_list": [ 00:14:28.179 { 00:14:28.179 "name": "BaseBdev1", 00:14:28.179 "uuid": "af0f64bb-4ad1-49ee-8610-358943d34ecc", 00:14:28.179 "is_configured": true, 00:14:28.179 "data_offset": 2048, 00:14:28.179 "data_size": 63488 00:14:28.179 }, 00:14:28.179 { 00:14:28.179 "name": "BaseBdev2", 00:14:28.179 "uuid": "6a2ba365-f893-4f8c-8cc0-3e2d69370332", 00:14:28.179 "is_configured": true, 00:14:28.179 "data_offset": 2048, 00:14:28.179 "data_size": 63488 00:14:28.179 } 00:14:28.179 ] 00:14:28.179 }' 00:14:28.179 20:56:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.179 20:56:56 -- common/autotest_common.sh@10 -- # set +x 00:14:28.746 20:56:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:29.005 [2024-06-09 20:56:56.993241] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.005 [2024-06-09 20:56:56.993271] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.005 [2024-06-09 20:56:56.993324] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:29.005 
20:56:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.005 20:56:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.263 20:56:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.264 "name": "Existed_Raid", 00:14:29.264 "uuid": "da99883a-96fb-4540-beea-4340dc3cf6fb", 00:14:29.264 "strip_size_kb": 64, 00:14:29.264 "state": "offline", 00:14:29.264 "raid_level": "concat", 00:14:29.264 "superblock": true, 00:14:29.264 "num_base_bdevs": 2, 00:14:29.264 "num_base_bdevs_discovered": 1, 00:14:29.264 "num_base_bdevs_operational": 1, 00:14:29.264 "base_bdevs_list": [ 00:14:29.264 { 00:14:29.264 "name": null, 00:14:29.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.264 "is_configured": false, 00:14:29.264 "data_offset": 2048, 00:14:29.264 "data_size": 63488 00:14:29.264 }, 00:14:29.264 { 00:14:29.264 "name": "BaseBdev2", 00:14:29.264 "uuid": "6a2ba365-f893-4f8c-8cc0-3e2d69370332", 00:14:29.264 "is_configured": true, 00:14:29.264 "data_offset": 2048, 00:14:29.264 "data_size": 63488 00:14:29.264 } 00:14:29.264 ] 00:14:29.264 }' 00:14:29.264 20:56:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.264 20:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:29.830 20:56:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:29.830 20:56:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:29.830 20:56:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.830 20:56:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:30.088 20:56:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:30.088 20:56:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:30.088 20:56:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:30.346 [2024-06-09 20:56:58.411564] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.346 [2024-06-09 20:56:58.411650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:14:30.346 20:56:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:30.346 20:56:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:30.346 20:56:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.346 20:56:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:30.605 20:56:58 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:30.605 20:56:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:30.605 20:56:58 -- bdev/bdev_raid.sh@287 -- # killprocess 112695 00:14:30.605 20:56:58 -- common/autotest_common.sh@926 -- # '[' -z 112695 ']' 00:14:30.605 20:56:58 -- common/autotest_common.sh@930 -- # kill -0 112695 00:14:30.605 20:56:58 -- common/autotest_common.sh@931 -- # uname 00:14:30.605 20:56:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:30.605 20:56:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112695 00:14:30.605 20:56:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:30.605 20:56:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:30.605 20:56:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112695' 00:14:30.605 killing process with pid 112695 00:14:30.605 20:56:58 -- common/autotest_common.sh@945 -- # kill 112695 00:14:30.605 [2024-06-09 20:56:58.714670] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.605 [2024-06-09 20:56:58.714789] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.605 20:56:58 -- common/autotest_common.sh@950 -- # wait 112695 00:14:31.978 ************************************ 00:14:31.978 END TEST raid_state_function_test_sb 00:14:31.978 ************************************ 00:14:31.978 20:56:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:31.978 00:14:31.978 real 0m10.806s 00:14:31.978 user 0m18.895s 00:14:31.978 sys 0m1.211s 00:14:31.978 20:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.978 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:31.978 20:56:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:31.978 20:56:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:31.979 20:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:31.979 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:31.979 ************************************ 00:14:31.979 START TEST raid_superblock_test 00:14:31.979 ************************************ 00:14:31.979 20:56:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=113027 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:31.979 20:56:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 113027 /var/tmp/spdk-raid.sock 00:14:31.979 20:56:59 -- common/autotest_common.sh@819 -- # '[' -z 113027 ']' 00:14:31.979 20:56:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.979 20:56:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:31.979 20:56:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.979 20:56:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:31.979 20:56:59 -- common/autotest_common.sh@10 -- # set +x 00:14:31.979 [2024-06-09 20:56:59.846401] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:31.979 [2024-06-09 20:56:59.846593] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113027 ] 00:14:31.979 [2024-06-09 20:57:00.018879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.237 [2024-06-09 20:57:00.237597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.495 [2024-06-09 20:57:00.423119] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.753 20:57:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:32.753 20:57:00 -- common/autotest_common.sh@852 -- # return 0 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:32.753 20:57:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:33.011 malloc1 00:14:33.011 20:57:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.269 [2024-06-09 20:57:01.195533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.269 [2024-06-09 20:57:01.195623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.269 [2024-06-09 20:57:01.195662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:33.269 [2024-06-09 20:57:01.195712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.269 [2024-06-09 20:57:01.198000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.269 [2024-06-09 20:57:01.198050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.269 pt1 00:14:33.269 20:57:01 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:14:33.269 20:57:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.269 20:57:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:33.270 20:57:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:33.527 malloc2 00:14:33.527 20:57:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.527 [2024-06-09 20:57:01.648773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.527 [2024-06-09 20:57:01.648833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.527 [2024-06-09 20:57:01.648873] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:33.527 [2024-06-09 20:57:01.648929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.527 [2024-06-09 20:57:01.651169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.527 [2024-06-09 20:57:01.651217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.527 pt2 00:14:33.527 20:57:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:33.527 20:57:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:33.527 20:57:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:33.785 [2024-06-09 20:57:01.828885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:33.785 [2024-06-09 20:57:01.830867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.785 [2024-06-09 20:57:01.831074] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:14:33.785 [2024-06-09 20:57:01.831090] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.785 [2024-06-09 20:57:01.831191] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:33.785 [2024-06-09 20:57:01.831529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:14:33.785 [2024-06-09 20:57:01.831551] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:14:33.785 [2024-06-09 20:57:01.831688] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.785 20:57:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.044 20:57:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.044 "name": "raid_bdev1", 00:14:34.044 "uuid": "fc9fc0c2-2589-4f83-b720-354616e0aec2", 00:14:34.044 "strip_size_kb": 64, 00:14:34.044 "state": "online", 00:14:34.044 "raid_level": "concat", 00:14:34.044 "superblock": true, 00:14:34.044 "num_base_bdevs": 2, 00:14:34.044 "num_base_bdevs_discovered": 2, 00:14:34.044 "num_base_bdevs_operational": 2, 00:14:34.044 "base_bdevs_list": [ 00:14:34.044 { 00:14:34.044 "name": "pt1", 00:14:34.044 "uuid": "cde2bfa2-e531-5731-8c17-09b35758c273", 00:14:34.044 "is_configured": true, 00:14:34.044 "data_offset": 2048, 00:14:34.044 "data_size": 63488 00:14:34.044 }, 00:14:34.044 { 00:14:34.044 "name": "pt2", 00:14:34.044 "uuid": "c7f2c3be-99a4-5834-aa11-8b2e3aa5fd7b", 00:14:34.044 "is_configured": true, 00:14:34.044 "data_offset": 2048, 00:14:34.044 "data_size": 63488 00:14:34.044 } 00:14:34.044 ] 00:14:34.044 }' 00:14:34.044 20:57:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.044 20:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:34.609 20:57:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:34.609 20:57:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:34.867 [2024-06-09 20:57:02.889249] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.867 20:57:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fc9fc0c2-2589-4f83-b720-354616e0aec2 00:14:34.867 20:57:02 -- bdev/bdev_raid.sh@380 -- # '[' -z fc9fc0c2-2589-4f83-b720-354616e0aec2 ']' 00:14:34.867 20:57:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:35.125 [2024-06-09 20:57:03.101099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.125 [2024-06-09 20:57:03.101121] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.125 [2024-06-09 20:57:03.101204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.125 [2024-06-09 20:57:03.101258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.125 [2024-06-09 20:57:03.101270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:14:35.125 20:57:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.125 20:57:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.382 20:57:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:35.639 20:57:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:35.639 20:57:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:35.897 20:57:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:35.897 20:57:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:35.897 20:57:04 -- common/autotest_common.sh@640 -- # local es=0 00:14:35.897 20:57:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:35.897 20:57:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.897 20:57:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.897 20:57:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.897 20:57:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.897 20:57:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.897 20:57:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:35.897 20:57:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.897 20:57:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:35.897 20:57:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:36.154 [2024-06-09 20:57:04.253296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.154 [2024-06-09 20:57:04.255270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.154 [2024-06-09 20:57:04.255336] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:36.154 [2024-06-09 20:57:04.255404] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:36.154 [2024-06-09 20:57:04.255441] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.154 [2024-06-09 20:57:04.255451] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:14:36.154 request: 00:14:36.154 { 00:14:36.154 "name": "raid_bdev1", 00:14:36.154 "raid_level": "concat", 00:14:36.154 "base_bdevs": [ 00:14:36.154 "malloc1", 00:14:36.154 "malloc2" 00:14:36.154 ], 00:14:36.154 "superblock": false, 00:14:36.154 "strip_size_kb": 64, 00:14:36.154 "method": "bdev_raid_create", 00:14:36.154 "req_id": 1 00:14:36.154 } 00:14:36.154 Got JSON-RPC error response 00:14:36.154 response: 00:14:36.154 { 00:14:36.154 "code": -17, 00:14:36.154 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.154 } 00:14:36.154 20:57:04 -- common/autotest_common.sh@643 -- # es=1 00:14:36.154 20:57:04 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:36.154 20:57:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:36.154 20:57:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:36.154 20:57:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:36.154 20:57:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.412 20:57:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:36.412 20:57:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:36.412 20:57:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.670 [2024-06-09 20:57:04.693327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.670 [2024-06-09 20:57:04.693415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.670 [2024-06-09 20:57:04.693451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:36.670 [2024-06-09 20:57:04.693477] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.670 [2024-06-09 20:57:04.695861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.670 [2024-06-09 20:57:04.695917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.670 [2024-06-09 20:57:04.695998] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:36.670 [2024-06-09 20:57:04.696047] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.670 pt1 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.670 20:57:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.928 20:57:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.928 "name": "raid_bdev1", 00:14:36.928 "uuid": "fc9fc0c2-2589-4f83-b720-354616e0aec2", 00:14:36.928 "strip_size_kb": 64, 00:14:36.928 "state": "configuring", 00:14:36.928 "raid_level": "concat", 00:14:36.928 "superblock": true, 00:14:36.928 "num_base_bdevs": 2, 00:14:36.928 "num_base_bdevs_discovered": 1, 00:14:36.928 "num_base_bdevs_operational": 2, 00:14:36.928 "base_bdevs_list": [ 00:14:36.928 { 00:14:36.928 "name": "pt1", 00:14:36.928 "uuid": "cde2bfa2-e531-5731-8c17-09b35758c273", 00:14:36.928 "is_configured": true, 00:14:36.928 "data_offset": 2048, 00:14:36.928 "data_size": 63488 00:14:36.928 }, 00:14:36.928 { 00:14:36.928 "name": null, 00:14:36.928 "uuid": 
"c7f2c3be-99a4-5834-aa11-8b2e3aa5fd7b", 00:14:36.928 "is_configured": false, 00:14:36.928 "data_offset": 2048, 00:14:36.928 "data_size": 63488 00:14:36.928 } 00:14:36.928 ] 00:14:36.928 }' 00:14:36.928 20:57:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.928 20:57:04 -- common/autotest_common.sh@10 -- # set +x 00:14:37.494 20:57:05 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:37.494 20:57:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:37.494 20:57:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:37.494 20:57:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.764 [2024-06-09 20:57:05.749580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.764 [2024-06-09 20:57:05.749684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.764 [2024-06-09 20:57:05.749725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:37.764 [2024-06-09 20:57:05.749752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.764 [2024-06-09 20:57:05.750221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.764 [2024-06-09 20:57:05.750262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.764 [2024-06-09 20:57:05.750355] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:37.764 [2024-06-09 20:57:05.750379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.764 [2024-06-09 20:57:05.750502] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:14:37.764 [2024-06-09 20:57:05.750514] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.764 [2024-06-09 20:57:05.750634] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:37.764 [2024-06-09 20:57:05.750940] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:14:37.764 [2024-06-09 20:57:05.750954] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:14:37.764 [2024-06-09 20:57:05.751078] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.764 pt2 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.764 20:57:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.765 20:57:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.101 20:57:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.101 "name": "raid_bdev1", 00:14:38.101 "uuid": "fc9fc0c2-2589-4f83-b720-354616e0aec2", 00:14:38.101 "strip_size_kb": 64, 00:14:38.101 "state": "online", 00:14:38.101 "raid_level": "concat", 00:14:38.101 "superblock": true, 00:14:38.101 "num_base_bdevs": 2, 00:14:38.101 "num_base_bdevs_discovered": 2, 00:14:38.101 "num_base_bdevs_operational": 2, 00:14:38.101 "base_bdevs_list": [ 00:14:38.101 { 00:14:38.101 "name": "pt1", 00:14:38.101 "uuid": "cde2bfa2-e531-5731-8c17-09b35758c273", 00:14:38.101 "is_configured": true, 00:14:38.101 "data_offset": 2048, 00:14:38.101 "data_size": 63488 00:14:38.101 }, 00:14:38.101 { 00:14:38.101 "name": "pt2", 00:14:38.101 "uuid": "c7f2c3be-99a4-5834-aa11-8b2e3aa5fd7b", 00:14:38.101 "is_configured": true, 00:14:38.101 "data_offset": 2048, 00:14:38.101 "data_size": 63488 00:14:38.101 } 00:14:38.101 ] 00:14:38.101 }' 00:14:38.101 20:57:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.101 20:57:05 -- common/autotest_common.sh@10 -- # set +x 00:14:38.668 20:57:06 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:38.668 20:57:06 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:38.927 [2024-06-09 20:57:06.862095] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.927 20:57:06 -- bdev/bdev_raid.sh@430 -- # '[' fc9fc0c2-2589-4f83-b720-354616e0aec2 '!=' fc9fc0c2-2589-4f83-b720-354616e0aec2 ']' 00:14:38.927 20:57:06 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:38.927 20:57:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:38.927 20:57:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:38.927 20:57:06 -- bdev/bdev_raid.sh@511 -- # killprocess 113027 00:14:38.927 20:57:06 -- common/autotest_common.sh@926 -- # '[' -z 113027 ']' 00:14:38.927 20:57:06 -- common/autotest_common.sh@930 -- # kill -0 113027 00:14:38.927 20:57:06 -- common/autotest_common.sh@931 -- # uname 00:14:38.927 20:57:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:38.927 20:57:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113027 00:14:38.927 20:57:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:38.927 20:57:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:38.927 20:57:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113027' 00:14:38.927 killing process with pid 113027 00:14:38.927 20:57:06 -- common/autotest_common.sh@945 -- # kill 113027 00:14:38.927 [2024-06-09 20:57:06.902308] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.927 [2024-06-09 20:57:06.902400] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.927 [2024-06-09 20:57:06.902467] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.927 [2024-06-09 20:57:06.902482] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:14:38.927 20:57:06 -- common/autotest_common.sh@950 -- # wait 113027 00:14:38.927 [2024-06-09 20:57:07.035449] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.305 ************************************ 00:14:40.305 END TEST raid_superblock_test 00:14:40.305 
************************************ 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:40.305 00:14:40.305 real 0m8.270s 00:14:40.305 user 0m13.993s 00:14:40.305 sys 0m1.059s 00:14:40.305 20:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.305 20:57:08 -- common/autotest_common.sh@10 -- # set +x 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:40.305 20:57:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:40.305 20:57:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.305 20:57:08 -- common/autotest_common.sh@10 -- # set +x 00:14:40.305 ************************************ 00:14:40.305 START TEST raid_state_function_test 00:14:40.305 ************************************ 00:14:40.305 20:57:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=113272 00:14:40.305 Process raid pid: 113272 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113272' 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.305 20:57:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113272 /var/tmp/spdk-raid.sock 00:14:40.305 20:57:08 -- common/autotest_common.sh@819 -- # '[' -z 113272 ']' 00:14:40.305 20:57:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.305 20:57:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:40.305 20:57:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.305 20:57:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.305 20:57:08 -- common/autotest_common.sh@10 -- # set +x 00:14:40.305 [2024-06-09 20:57:08.171685] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:40.305 [2024-06-09 20:57:08.171883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.305 [2024-06-09 20:57:08.323163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.564 [2024-06-09 20:57:08.511338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.564 [2024-06-09 20:57:08.700240] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.132 20:57:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:41.132 20:57:09 -- common/autotest_common.sh@852 -- # return 0 00:14:41.132 20:57:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:41.132 [2024-06-09 20:57:09.292884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.132 [2024-06-09 20:57:09.292982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.132 [2024-06-09 20:57:09.292996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.132 [2024-06-09 20:57:09.293014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.390 20:57:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.390 "name": "Existed_Raid", 00:14:41.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.390 "strip_size_kb": 0, 00:14:41.390 "state": "configuring", 00:14:41.390 "raid_level": "raid1", 00:14:41.390 "superblock": false, 00:14:41.390 "num_base_bdevs": 2, 00:14:41.390 "num_base_bdevs_discovered": 0, 00:14:41.390 "num_base_bdevs_operational": 2, 00:14:41.390 "base_bdevs_list": [ 00:14:41.390 { 00:14:41.390 "name": "BaseBdev1", 00:14:41.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.390 "is_configured": false, 00:14:41.390 
"data_offset": 0, 00:14:41.390 "data_size": 0 00:14:41.390 }, 00:14:41.390 { 00:14:41.390 "name": "BaseBdev2", 00:14:41.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.390 "is_configured": false, 00:14:41.390 "data_offset": 0, 00:14:41.390 "data_size": 0 00:14:41.390 } 00:14:41.390 ] 00:14:41.390 }' 00:14:41.648 20:57:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.648 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:14:42.214 20:57:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.214 [2024-06-09 20:57:10.344967] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.214 [2024-06-09 20:57:10.345007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:42.214 20:57:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:42.473 [2024-06-09 20:57:10.633018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.473 [2024-06-09 20:57:10.633091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.473 [2024-06-09 20:57:10.633102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.473 [2024-06-09 20:57:10.633127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.731 20:57:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.731 [2024-06-09 20:57:10.862508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.731 BaseBdev1 00:14:42.731 20:57:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:42.731 20:57:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:42.731 20:57:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:42.731 20:57:10 -- common/autotest_common.sh@889 -- # local i 00:14:42.731 20:57:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:42.731 20:57:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:42.731 20:57:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.989 20:57:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.248 [ 00:14:43.248 { 00:14:43.248 "name": "BaseBdev1", 00:14:43.248 "aliases": [ 00:14:43.248 "06541460-6d3f-40d6-b0d6-204bfcae0cf6" 00:14:43.248 ], 00:14:43.248 "product_name": "Malloc disk", 00:14:43.248 "block_size": 512, 00:14:43.248 "num_blocks": 65536, 00:14:43.248 "uuid": "06541460-6d3f-40d6-b0d6-204bfcae0cf6", 00:14:43.248 "assigned_rate_limits": { 00:14:43.248 "rw_ios_per_sec": 0, 00:14:43.248 "rw_mbytes_per_sec": 0, 00:14:43.248 "r_mbytes_per_sec": 0, 00:14:43.248 "w_mbytes_per_sec": 0 00:14:43.248 }, 00:14:43.248 "claimed": true, 00:14:43.248 "claim_type": "exclusive_write", 00:14:43.248 "zoned": false, 00:14:43.248 "supported_io_types": { 00:14:43.248 "read": true, 00:14:43.248 "write": true, 00:14:43.248 "unmap": true, 00:14:43.248 "write_zeroes": true, 00:14:43.248 "flush": true, 00:14:43.248 "reset": true, 00:14:43.248 "compare": false, 
00:14:43.248 "compare_and_write": false, 00:14:43.248 "abort": true, 00:14:43.248 "nvme_admin": false, 00:14:43.248 "nvme_io": false 00:14:43.248 }, 00:14:43.248 "memory_domains": [ 00:14:43.248 { 00:14:43.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.248 "dma_device_type": 2 00:14:43.248 } 00:14:43.248 ], 00:14:43.248 "driver_specific": {} 00:14:43.248 } 00:14:43.248 ] 00:14:43.248 20:57:11 -- common/autotest_common.sh@895 -- # return 0 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.248 20:57:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.507 20:57:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:43.507 "name": "Existed_Raid", 00:14:43.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.507 "strip_size_kb": 0, 00:14:43.507 "state": "configuring", 00:14:43.507 "raid_level": "raid1", 00:14:43.507 "superblock": false, 00:14:43.507 "num_base_bdevs": 2, 00:14:43.507 "num_base_bdevs_discovered": 1, 00:14:43.507 "num_base_bdevs_operational": 2, 00:14:43.507 "base_bdevs_list": [ 00:14:43.507 { 00:14:43.507 "name": "BaseBdev1", 00:14:43.507 "uuid": "06541460-6d3f-40d6-b0d6-204bfcae0cf6", 00:14:43.507 "is_configured": true, 00:14:43.507 "data_offset": 0, 00:14:43.507 "data_size": 65536 00:14:43.507 }, 00:14:43.507 { 00:14:43.507 "name": "BaseBdev2", 00:14:43.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.507 "is_configured": false, 00:14:43.507 "data_offset": 0, 00:14:43.507 "data_size": 0 00:14:43.507 } 00:14:43.507 ] 00:14:43.507 }' 00:14:43.507 20:57:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:43.507 20:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:44.084 20:57:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.345 [2024-06-09 20:57:12.398865] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.345 [2024-06-09 20:57:12.398934] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:44.345 20:57:12 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:44.345 20:57:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.603 [2024-06-09 20:57:12.646947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.603 [2024-06-09 20:57:12.648934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.603 [2024-06-09 
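# A raid bdev created before all of its base bdevs exist sits in
# "configuring" (num_base_bdevs_discovered was 1 earlier in this test); the
# moment the last named base bdev is registered, via the bdev_malloc_create
# of BaseBdev2 just above, the raid1 flips to "online", which the verify
# step below confirms. Minimal sketch of the sequence (same rpc.py/socket
# assumptions; not a verbatim capture of this run):
#
#   rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
#   "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
#   "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2   # 32 MiB, 512 B blocks (65536 blocks)
#   "$rpc" -s "$sock" bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "Existed_Raid") | .state'  # prints: online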
20:57:12.649008] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:44.603 20:57:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.604 20:57:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.604 20:57:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.604 20:57:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.604 20:57:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.604 20:57:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.862 20:57:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.862 "name": "Existed_Raid", 00:14:44.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.862 "strip_size_kb": 0, 00:14:44.862 "state": "configuring", 00:14:44.862 "raid_level": "raid1", 00:14:44.862 "superblock": false, 00:14:44.862 "num_base_bdevs": 2, 00:14:44.862 "num_base_bdevs_discovered": 1, 00:14:44.862 "num_base_bdevs_operational": 2, 00:14:44.862 "base_bdevs_list": [ 00:14:44.862 { 00:14:44.862 "name": "BaseBdev1", 00:14:44.862 "uuid": "06541460-6d3f-40d6-b0d6-204bfcae0cf6", 00:14:44.862 "is_configured": true, 00:14:44.862 "data_offset": 0, 00:14:44.862 "data_size": 65536 00:14:44.862 }, 00:14:44.862 { 00:14:44.862 "name": "BaseBdev2", 00:14:44.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.862 "is_configured": false, 00:14:44.862 "data_offset": 0, 00:14:44.862 "data_size": 0 00:14:44.862 } 00:14:44.862 ] 00:14:44.862 }' 00:14:44.862 20:57:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.862 20:57:12 -- common/autotest_common.sh@10 -- # set +x 00:14:45.428 20:57:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.687 [2024-06-09 20:57:13.693447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.687 [2024-06-09 20:57:13.693494] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:45.687 [2024-06-09 20:57:13.693503] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:45.687 [2024-06-09 20:57:13.693616] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:45.687 [2024-06-09 20:57:13.693951] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:45.687 [2024-06-09 20:57:13.693973] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:45.687 [2024-06-09 20:57:13.694237] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.687 BaseBdev2 00:14:45.687 20:57:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:45.687 
20:57:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:45.687 20:57:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:45.687 20:57:13 -- common/autotest_common.sh@889 -- # local i 00:14:45.687 20:57:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:45.687 20:57:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:45.687 20:57:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.945 20:57:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.204 [ 00:14:46.204 { 00:14:46.204 "name": "BaseBdev2", 00:14:46.204 "aliases": [ 00:14:46.204 "82c52ce3-8904-4767-af00-da1b76bb8f7b" 00:14:46.204 ], 00:14:46.204 "product_name": "Malloc disk", 00:14:46.204 "block_size": 512, 00:14:46.204 "num_blocks": 65536, 00:14:46.204 "uuid": "82c52ce3-8904-4767-af00-da1b76bb8f7b", 00:14:46.204 "assigned_rate_limits": { 00:14:46.204 "rw_ios_per_sec": 0, 00:14:46.204 "rw_mbytes_per_sec": 0, 00:14:46.204 "r_mbytes_per_sec": 0, 00:14:46.204 "w_mbytes_per_sec": 0 00:14:46.204 }, 00:14:46.204 "claimed": true, 00:14:46.204 "claim_type": "exclusive_write", 00:14:46.204 "zoned": false, 00:14:46.204 "supported_io_types": { 00:14:46.204 "read": true, 00:14:46.204 "write": true, 00:14:46.204 "unmap": true, 00:14:46.204 "write_zeroes": true, 00:14:46.204 "flush": true, 00:14:46.204 "reset": true, 00:14:46.204 "compare": false, 00:14:46.204 "compare_and_write": false, 00:14:46.204 "abort": true, 00:14:46.204 "nvme_admin": false, 00:14:46.204 "nvme_io": false 00:14:46.204 }, 00:14:46.204 "memory_domains": [ 00:14:46.204 { 00:14:46.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.204 "dma_device_type": 2 00:14:46.204 } 00:14:46.204 ], 00:14:46.204 "driver_specific": {} 00:14:46.204 } 00:14:46.204 ] 00:14:46.204 20:57:14 -- common/autotest_common.sh@895 -- # return 0 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.204 "name": "Existed_Raid", 00:14:46.204 "uuid": "ebf12f1c-8aca-4f25-84f2-4b55337da8dc", 00:14:46.204 "strip_size_kb": 0, 00:14:46.204 "state": "online", 00:14:46.204 "raid_level": "raid1", 00:14:46.204 "superblock": false, 00:14:46.204 "num_base_bdevs": 2, 00:14:46.204 
"num_base_bdevs_discovered": 2, 00:14:46.204 "num_base_bdevs_operational": 2, 00:14:46.204 "base_bdevs_list": [ 00:14:46.204 { 00:14:46.204 "name": "BaseBdev1", 00:14:46.204 "uuid": "06541460-6d3f-40d6-b0d6-204bfcae0cf6", 00:14:46.204 "is_configured": true, 00:14:46.204 "data_offset": 0, 00:14:46.204 "data_size": 65536 00:14:46.204 }, 00:14:46.204 { 00:14:46.204 "name": "BaseBdev2", 00:14:46.204 "uuid": "82c52ce3-8904-4767-af00-da1b76bb8f7b", 00:14:46.204 "is_configured": true, 00:14:46.204 "data_offset": 0, 00:14:46.204 "data_size": 65536 00:14:46.204 } 00:14:46.204 ] 00:14:46.204 }' 00:14:46.204 20:57:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.204 20:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.771 20:57:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:47.030 [2024-06-09 20:57:15.149823] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.288 20:57:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.547 20:57:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.547 "name": "Existed_Raid", 00:14:47.547 "uuid": "ebf12f1c-8aca-4f25-84f2-4b55337da8dc", 00:14:47.547 "strip_size_kb": 0, 00:14:47.547 "state": "online", 00:14:47.547 "raid_level": "raid1", 00:14:47.547 "superblock": false, 00:14:47.547 "num_base_bdevs": 2, 00:14:47.547 "num_base_bdevs_discovered": 1, 00:14:47.547 "num_base_bdevs_operational": 1, 00:14:47.547 "base_bdevs_list": [ 00:14:47.547 { 00:14:47.547 "name": null, 00:14:47.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.547 "is_configured": false, 00:14:47.547 "data_offset": 0, 00:14:47.547 "data_size": 65536 00:14:47.547 }, 00:14:47.547 { 00:14:47.547 "name": "BaseBdev2", 00:14:47.547 "uuid": "82c52ce3-8904-4767-af00-da1b76bb8f7b", 00:14:47.547 "is_configured": true, 00:14:47.547 "data_offset": 0, 00:14:47.547 "data_size": 65536 00:14:47.547 } 00:14:47.547 ] 00:14:47.547 }' 00:14:47.547 20:57:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.547 20:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:48.114 20:57:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:48.114 20:57:16 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:14:48.114 20:57:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.114 20:57:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:48.373 20:57:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:48.373 20:57:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.373 20:57:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:48.373 [2024-06-09 20:57:16.510370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.373 [2024-06-09 20:57:16.510403] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.373 [2024-06-09 20:57:16.510495] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.632 [2024-06-09 20:57:16.578874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.632 [2024-06-09 20:57:16.578927] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:48.632 20:57:16 -- bdev/bdev_raid.sh@287 -- # killprocess 113272 00:14:48.632 20:57:16 -- common/autotest_common.sh@926 -- # '[' -z 113272 ']' 00:14:48.632 20:57:16 -- common/autotest_common.sh@930 -- # kill -0 113272 00:14:48.632 20:57:16 -- common/autotest_common.sh@931 -- # uname 00:14:48.632 20:57:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:48.632 20:57:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113272 00:14:48.891 20:57:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:48.892 20:57:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:48.892 20:57:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113272' 00:14:48.892 killing process with pid 113272 00:14:48.892 20:57:16 -- common/autotest_common.sh@945 -- # kill 113272 00:14:48.892 [2024-06-09 20:57:16.812435] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.892 20:57:16 -- common/autotest_common.sh@950 -- # wait 113272 00:14:48.892 [2024-06-09 20:57:16.812550] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.828 ************************************ 00:14:49.828 END TEST raid_state_function_test 00:14:49.828 ************************************ 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:49.828 00:14:49.828 real 0m9.723s 00:14:49.828 user 0m16.702s 00:14:49.828 sys 0m1.261s 00:14:49.828 20:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.828 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:14:49.828 20:57:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:49.828 20:57:17 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:14:49.828 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.828 ************************************ 00:14:49.828 START TEST raid_state_function_test_sb 00:14:49.828 ************************************ 00:14:49.828 20:57:17 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=113594 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113594' 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.828 Process raid pid: 113594 00:14:49.828 20:57:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113594 /var/tmp/spdk-raid.sock 00:14:49.828 20:57:17 -- common/autotest_common.sh@819 -- # '[' -z 113594 ']' 00:14:49.828 20:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.828 20:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.828 20:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.828 20:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.828 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.828 [2024-06-09 20:57:17.968967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:49.828 [2024-06-09 20:57:17.969152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.087 [2024-06-09 20:57:18.134426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.358 [2024-06-09 20:57:18.321325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.358 [2024-06-09 20:57:18.513353] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.939 20:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.939 20:57:18 -- common/autotest_common.sh@852 -- # return 0 00:14:50.939 20:57:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.939 [2024-06-09 20:57:19.040250] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.939 [2024-06-09 20:57:19.040332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.939 [2024-06-09 20:57:19.040344] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.939 [2024-06-09 20:57:19.040365] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.939 20:57:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.198 20:57:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.198 "name": "Existed_Raid", 00:14:51.198 "uuid": "fe44ff65-a6a1-4439-b761-ec61c05f569d", 00:14:51.198 "strip_size_kb": 0, 00:14:51.198 "state": "configuring", 00:14:51.198 "raid_level": "raid1", 00:14:51.198 "superblock": true, 00:14:51.198 "num_base_bdevs": 2, 00:14:51.198 "num_base_bdevs_discovered": 0, 00:14:51.198 "num_base_bdevs_operational": 2, 00:14:51.198 "base_bdevs_list": [ 00:14:51.198 { 00:14:51.198 "name": "BaseBdev1", 00:14:51.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.198 "is_configured": false, 00:14:51.198 "data_offset": 0, 00:14:51.198 "data_size": 0 00:14:51.198 }, 00:14:51.198 { 00:14:51.198 "name": "BaseBdev2", 00:14:51.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.198 "is_configured": false, 00:14:51.198 "data_offset": 0, 00:14:51.198 "data_size": 0 00:14:51.198 } 00:14:51.198 ] 00:14:51.198 }' 00:14:51.198 20:57:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.198 20:57:19 -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.764 20:57:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:52.022 [2024-06-09 20:57:20.076327] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.022 [2024-06-09 20:57:20.076366] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:52.022 20:57:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:52.281 [2024-06-09 20:57:20.276394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.281 [2024-06-09 20:57:20.276469] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.281 [2024-06-09 20:57:20.276481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.281 [2024-06-09 20:57:20.276505] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.281 20:57:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.539 [2024-06-09 20:57:20.524098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.539 BaseBdev1 00:14:52.539 20:57:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:52.539 20:57:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:52.539 20:57:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.539 20:57:20 -- common/autotest_common.sh@889 -- # local i 00:14:52.539 20:57:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.539 20:57:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.539 20:57:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.797 20:57:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.797 [ 00:14:52.797 { 00:14:52.797 "name": "BaseBdev1", 00:14:52.797 "aliases": [ 00:14:52.797 "50951354-72e5-4f57-80f4-2bac79930676" 00:14:52.797 ], 00:14:52.797 "product_name": "Malloc disk", 00:14:52.797 "block_size": 512, 00:14:52.797 "num_blocks": 65536, 00:14:52.797 "uuid": "50951354-72e5-4f57-80f4-2bac79930676", 00:14:52.797 "assigned_rate_limits": { 00:14:52.797 "rw_ios_per_sec": 0, 00:14:52.797 "rw_mbytes_per_sec": 0, 00:14:52.797 "r_mbytes_per_sec": 0, 00:14:52.797 "w_mbytes_per_sec": 0 00:14:52.797 }, 00:14:52.797 "claimed": true, 00:14:52.797 "claim_type": "exclusive_write", 00:14:52.797 "zoned": false, 00:14:52.797 "supported_io_types": { 00:14:52.797 "read": true, 00:14:52.797 "write": true, 00:14:52.797 "unmap": true, 00:14:52.797 "write_zeroes": true, 00:14:52.797 "flush": true, 00:14:52.797 "reset": true, 00:14:52.797 "compare": false, 00:14:52.797 "compare_and_write": false, 00:14:52.797 "abort": true, 00:14:52.797 "nvme_admin": false, 00:14:52.797 "nvme_io": false 00:14:52.797 }, 00:14:52.797 "memory_domains": [ 00:14:52.797 { 00:14:52.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.797 "dma_device_type": 2 00:14:52.797 } 00:14:52.797 ], 00:14:52.798 "driver_specific": {} 00:14:52.798 } 00:14:52.798 ] 00:14:53.056 20:57:20 -- 
common/autotest_common.sh@895 -- # return 0 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.056 20:57:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.056 20:57:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.056 "name": "Existed_Raid", 00:14:53.056 "uuid": "91b19eed-52e3-4f2f-85fb-609b223dcca8", 00:14:53.056 "strip_size_kb": 0, 00:14:53.056 "state": "configuring", 00:14:53.056 "raid_level": "raid1", 00:14:53.056 "superblock": true, 00:14:53.056 "num_base_bdevs": 2, 00:14:53.056 "num_base_bdevs_discovered": 1, 00:14:53.056 "num_base_bdevs_operational": 2, 00:14:53.056 "base_bdevs_list": [ 00:14:53.056 { 00:14:53.056 "name": "BaseBdev1", 00:14:53.056 "uuid": "50951354-72e5-4f57-80f4-2bac79930676", 00:14:53.056 "is_configured": true, 00:14:53.056 "data_offset": 2048, 00:14:53.056 "data_size": 63488 00:14:53.056 }, 00:14:53.056 { 00:14:53.056 "name": "BaseBdev2", 00:14:53.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.056 "is_configured": false, 00:14:53.056 "data_offset": 0, 00:14:53.056 "data_size": 0 00:14:53.056 } 00:14:53.056 ] 00:14:53.056 }' 00:14:53.056 20:57:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.056 20:57:21 -- common/autotest_common.sh@10 -- # set +x 00:14:53.991 20:57:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:53.991 [2024-06-09 20:57:22.072384] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.991 [2024-06-09 20:57:22.072428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:53.991 20:57:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:53.991 20:57:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:54.250 20:57:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.509 BaseBdev1 00:14:54.509 20:57:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:54.509 20:57:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:54.509 20:57:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:54.509 20:57:22 -- common/autotest_common.sh@889 -- # local i 00:14:54.509 20:57:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:54.509 20:57:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:54.509 20:57:22 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.767 20:57:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.026 [ 00:14:55.026 { 00:14:55.026 "name": "BaseBdev1", 00:14:55.026 "aliases": [ 00:14:55.026 "842f9f86-7d40-40b2-96d8-e0f17801de4b" 00:14:55.026 ], 00:14:55.026 "product_name": "Malloc disk", 00:14:55.026 "block_size": 512, 00:14:55.026 "num_blocks": 65536, 00:14:55.026 "uuid": "842f9f86-7d40-40b2-96d8-e0f17801de4b", 00:14:55.026 "assigned_rate_limits": { 00:14:55.026 "rw_ios_per_sec": 0, 00:14:55.026 "rw_mbytes_per_sec": 0, 00:14:55.026 "r_mbytes_per_sec": 0, 00:14:55.026 "w_mbytes_per_sec": 0 00:14:55.026 }, 00:14:55.026 "claimed": false, 00:14:55.026 "zoned": false, 00:14:55.026 "supported_io_types": { 00:14:55.026 "read": true, 00:14:55.026 "write": true, 00:14:55.026 "unmap": true, 00:14:55.026 "write_zeroes": true, 00:14:55.026 "flush": true, 00:14:55.026 "reset": true, 00:14:55.026 "compare": false, 00:14:55.026 "compare_and_write": false, 00:14:55.026 "abort": true, 00:14:55.026 "nvme_admin": false, 00:14:55.026 "nvme_io": false 00:14:55.026 }, 00:14:55.026 "memory_domains": [ 00:14:55.026 { 00:14:55.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.026 "dma_device_type": 2 00:14:55.026 } 00:14:55.026 ], 00:14:55.026 "driver_specific": {} 00:14:55.026 } 00:14:55.026 ] 00:14:55.026 20:57:23 -- common/autotest_common.sh@895 -- # return 0 00:14:55.026 20:57:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:55.285 [2024-06-09 20:57:23.231421] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.285 [2024-06-09 20:57:23.233416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.285 [2024-06-09 20:57:23.233478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.285 20:57:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.543 20:57:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.543 "name": "Existed_Raid", 00:14:55.543 "uuid": "87844d36-b668-4010-aeee-bfc4f4378558", 00:14:55.543 "strip_size_kb": 0, 00:14:55.543 "state": "configuring", 
00:14:55.543 "raid_level": "raid1", 00:14:55.543 "superblock": true, 00:14:55.543 "num_base_bdevs": 2, 00:14:55.543 "num_base_bdevs_discovered": 1, 00:14:55.543 "num_base_bdevs_operational": 2, 00:14:55.543 "base_bdevs_list": [ 00:14:55.543 { 00:14:55.543 "name": "BaseBdev1", 00:14:55.543 "uuid": "842f9f86-7d40-40b2-96d8-e0f17801de4b", 00:14:55.543 "is_configured": true, 00:14:55.543 "data_offset": 2048, 00:14:55.543 "data_size": 63488 00:14:55.543 }, 00:14:55.543 { 00:14:55.543 "name": "BaseBdev2", 00:14:55.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.543 "is_configured": false, 00:14:55.543 "data_offset": 0, 00:14:55.543 "data_size": 0 00:14:55.543 } 00:14:55.543 ] 00:14:55.544 }' 00:14:55.544 20:57:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.544 20:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:56.110 20:57:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.369 [2024-06-09 20:57:24.376768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.369 [2024-06-09 20:57:24.377036] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:56.369 [2024-06-09 20:57:24.377052] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:56.369 [2024-06-09 20:57:24.377218] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:56.369 [2024-06-09 20:57:24.377631] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:56.369 [2024-06-09 20:57:24.377655] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:14:56.369 [2024-06-09 20:57:24.377837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.369 BaseBdev2 00:14:56.369 20:57:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:56.369 20:57:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:56.369 20:57:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:56.369 20:57:24 -- common/autotest_common.sh@889 -- # local i 00:14:56.369 20:57:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:56.369 20:57:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:56.369 20:57:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.627 20:57:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.885 [ 00:14:56.885 { 00:14:56.885 "name": "BaseBdev2", 00:14:56.885 "aliases": [ 00:14:56.885 "f27a5aac-7378-44c9-9387-d5ded6f1cf8b" 00:14:56.885 ], 00:14:56.885 "product_name": "Malloc disk", 00:14:56.885 "block_size": 512, 00:14:56.885 "num_blocks": 65536, 00:14:56.885 "uuid": "f27a5aac-7378-44c9-9387-d5ded6f1cf8b", 00:14:56.885 "assigned_rate_limits": { 00:14:56.885 "rw_ios_per_sec": 0, 00:14:56.885 "rw_mbytes_per_sec": 0, 00:14:56.885 "r_mbytes_per_sec": 0, 00:14:56.885 "w_mbytes_per_sec": 0 00:14:56.885 }, 00:14:56.885 "claimed": true, 00:14:56.885 "claim_type": "exclusive_write", 00:14:56.885 "zoned": false, 00:14:56.885 "supported_io_types": { 00:14:56.885 "read": true, 00:14:56.885 "write": true, 00:14:56.885 "unmap": true, 00:14:56.885 "write_zeroes": true, 00:14:56.885 "flush": true, 00:14:56.885 "reset": true, 
00:14:56.885 "compare": false, 00:14:56.885 "compare_and_write": false, 00:14:56.885 "abort": true, 00:14:56.885 "nvme_admin": false, 00:14:56.885 "nvme_io": false 00:14:56.885 }, 00:14:56.885 "memory_domains": [ 00:14:56.885 { 00:14:56.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.885 "dma_device_type": 2 00:14:56.885 } 00:14:56.885 ], 00:14:56.885 "driver_specific": {} 00:14:56.885 } 00:14:56.885 ] 00:14:56.885 20:57:24 -- common/autotest_common.sh@895 -- # return 0 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.885 20:57:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.143 20:57:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.143 "name": "Existed_Raid", 00:14:57.143 "uuid": "87844d36-b668-4010-aeee-bfc4f4378558", 00:14:57.143 "strip_size_kb": 0, 00:14:57.143 "state": "online", 00:14:57.143 "raid_level": "raid1", 00:14:57.143 "superblock": true, 00:14:57.143 "num_base_bdevs": 2, 00:14:57.143 "num_base_bdevs_discovered": 2, 00:14:57.143 "num_base_bdevs_operational": 2, 00:14:57.143 "base_bdevs_list": [ 00:14:57.143 { 00:14:57.143 "name": "BaseBdev1", 00:14:57.143 "uuid": "842f9f86-7d40-40b2-96d8-e0f17801de4b", 00:14:57.143 "is_configured": true, 00:14:57.143 "data_offset": 2048, 00:14:57.143 "data_size": 63488 00:14:57.143 }, 00:14:57.143 { 00:14:57.143 "name": "BaseBdev2", 00:14:57.143 "uuid": "f27a5aac-7378-44c9-9387-d5ded6f1cf8b", 00:14:57.143 "is_configured": true, 00:14:57.143 "data_offset": 2048, 00:14:57.143 "data_size": 63488 00:14:57.143 } 00:14:57.143 ] 00:14:57.143 }' 00:14:57.143 20:57:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.143 20:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:57.709 20:57:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:57.968 [2024-06-09 20:57:25.989251] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.968 
20:57:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.968 20:57:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.225 20:57:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.225 "name": "Existed_Raid", 00:14:58.225 "uuid": "87844d36-b668-4010-aeee-bfc4f4378558", 00:14:58.225 "strip_size_kb": 0, 00:14:58.225 "state": "online", 00:14:58.225 "raid_level": "raid1", 00:14:58.225 "superblock": true, 00:14:58.225 "num_base_bdevs": 2, 00:14:58.225 "num_base_bdevs_discovered": 1, 00:14:58.225 "num_base_bdevs_operational": 1, 00:14:58.225 "base_bdevs_list": [ 00:14:58.225 { 00:14:58.225 "name": null, 00:14:58.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.225 "is_configured": false, 00:14:58.225 "data_offset": 2048, 00:14:58.225 "data_size": 63488 00:14:58.225 }, 00:14:58.225 { 00:14:58.225 "name": "BaseBdev2", 00:14:58.225 "uuid": "f27a5aac-7378-44c9-9387-d5ded6f1cf8b", 00:14:58.225 "is_configured": true, 00:14:58.225 "data_offset": 2048, 00:14:58.225 "data_size": 63488 00:14:58.225 } 00:14:58.225 ] 00:14:58.225 }' 00:14:58.225 20:57:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.225 20:57:26 -- common/autotest_common.sh@10 -- # set +x 00:14:58.791 20:57:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:58.791 20:57:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:58.791 20:57:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.791 20:57:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:59.049 20:57:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:59.049 20:57:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.049 20:57:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:59.307 [2024-06-09 20:57:27.335214] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.307 [2024-06-09 20:57:27.335259] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.307 [2024-06-09 20:57:27.335323] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.307 [2024-06-09 20:57:27.403258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.307 [2024-06-09 20:57:27.403294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:14:59.307 20:57:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:59.307 20:57:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:59.307 20:57:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.307 20:57:27 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.566 20:57:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:59.566 20:57:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:59.566 20:57:27 -- bdev/bdev_raid.sh@287 -- # killprocess 113594 00:14:59.566 20:57:27 -- common/autotest_common.sh@926 -- # '[' -z 113594 ']' 00:14:59.566 20:57:27 -- common/autotest_common.sh@930 -- # kill -0 113594 00:14:59.566 20:57:27 -- common/autotest_common.sh@931 -- # uname 00:14:59.566 20:57:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:59.566 20:57:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113594 00:14:59.566 20:57:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:59.566 20:57:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:59.566 20:57:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113594' 00:14:59.566 killing process with pid 113594 00:14:59.566 20:57:27 -- common/autotest_common.sh@945 -- # kill 113594 00:14:59.566 [2024-06-09 20:57:27.642195] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.566 20:57:27 -- common/autotest_common.sh@950 -- # wait 113594 00:14:59.566 [2024-06-09 20:57:27.642348] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:00.939 00:15:00.939 real 0m10.778s 00:15:00.939 user 0m18.698s 00:15:00.939 sys 0m1.302s 00:15:00.939 20:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.939 ************************************ 00:15:00.939 END TEST raid_state_function_test_sb 00:15:00.939 ************************************ 00:15:00.939 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:00.939 20:57:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:00.939 20:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.939 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:15:00.939 ************************************ 00:15:00.939 START TEST raid_superblock_test 00:15:00.939 ************************************ 00:15:00.939 20:57:28 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=113931 00:15:00.939 
20:57:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 113931 /var/tmp/spdk-raid.sock 00:15:00.939 20:57:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:00.939 20:57:28 -- common/autotest_common.sh@819 -- # '[' -z 113931 ']' 00:15:00.939 20:57:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.939 20:57:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.939 20:57:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:00.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.939 20:57:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.939 20:57:28 -- common/autotest_common.sh@10 -- # set +x 00:15:00.939 [2024-06-09 20:57:28.796388] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:00.939 [2024-06-09 20:57:28.796571] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113931 ] 00:15:00.939 [2024-06-09 20:57:28.959388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.197 [2024-06-09 20:57:29.144702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.197 [2024-06-09 20:57:29.330224] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.766 20:57:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.766 20:57:29 -- common/autotest_common.sh@852 -- # return 0 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.766 20:57:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:02.025 malloc1 00:15:02.025 20:57:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.283 [2024-06-09 20:57:30.209286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.283 [2024-06-09 20:57:30.209376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.283 [2024-06-09 20:57:30.209408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:02.283 [2024-06-09 20:57:30.209456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.283 [2024-06-09 20:57:30.211925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.283 [2024-06-09 20:57:30.211977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.283 pt1 
00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.283 20:57:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:02.542 malloc2 00:15:02.542 20:57:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.542 [2024-06-09 20:57:30.694218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.542 [2024-06-09 20:57:30.694324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.542 [2024-06-09 20:57:30.694372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:02.542 [2024-06-09 20:57:30.694429] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.542 [2024-06-09 20:57:30.697066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.542 [2024-06-09 20:57:30.697133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.542 pt2 00:15:02.542 20:57:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:02.542 20:57:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:02.542 20:57:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:02.800 [2024-06-09 20:57:30.922292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.800 [2024-06-09 20:57:30.924302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.800 [2024-06-09 20:57:30.924498] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:02.800 [2024-06-09 20:57:30.924512] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:02.800 [2024-06-09 20:57:30.924669] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:02.800 [2024-06-09 20:57:30.925047] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:02.800 [2024-06-09 20:57:30.925085] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:02.800 [2024-06-09 20:57:30.925226] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:02.800 20:57:30 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.800 20:57:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.058 20:57:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.058 "name": "raid_bdev1", 00:15:03.058 "uuid": "0fdd74f1-9d74-4fda-893e-004225a32efc", 00:15:03.058 "strip_size_kb": 0, 00:15:03.058 "state": "online", 00:15:03.058 "raid_level": "raid1", 00:15:03.058 "superblock": true, 00:15:03.059 "num_base_bdevs": 2, 00:15:03.059 "num_base_bdevs_discovered": 2, 00:15:03.059 "num_base_bdevs_operational": 2, 00:15:03.059 "base_bdevs_list": [ 00:15:03.059 { 00:15:03.059 "name": "pt1", 00:15:03.059 "uuid": "75f0abd6-e37a-50e3-a941-49259247db00", 00:15:03.059 "is_configured": true, 00:15:03.059 "data_offset": 2048, 00:15:03.059 "data_size": 63488 00:15:03.059 }, 00:15:03.059 { 00:15:03.059 "name": "pt2", 00:15:03.059 "uuid": "bbfc5c49-0298-5a44-ad7a-9f1557908857", 00:15:03.059 "is_configured": true, 00:15:03.059 "data_offset": 2048, 00:15:03.059 "data_size": 63488 00:15:03.059 } 00:15:03.059 ] 00:15:03.059 }' 00:15:03.059 20:57:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.059 20:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.625 20:57:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:03.625 20:57:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.883 [2024-06-09 20:57:31.978593] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.883 20:57:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0fdd74f1-9d74-4fda-893e-004225a32efc 00:15:03.883 20:57:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 0fdd74f1-9d74-4fda-893e-004225a32efc ']' 00:15:03.883 20:57:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:04.149 [2024-06-09 20:57:32.258462] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.149 [2024-06-09 20:57:32.258489] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.149 [2024-06-09 20:57:32.258566] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.149 [2024-06-09 20:57:32.258670] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.149 [2024-06-09 20:57:32.258685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:04.149 20:57:32 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.149 20:57:32 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:04.408 20:57:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:04.408 20:57:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:04.408 20:57:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.408 20:57:32 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:04.666 20:57:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.666 20:57:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:04.924 20:57:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:04.924 20:57:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:05.183 20:57:33 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:05.183 20:57:33 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.183 20:57:33 -- common/autotest_common.sh@640 -- # local es=0 00:15:05.183 20:57:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.183 20:57:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.183 20:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.183 20:57:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.183 20:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.183 20:57:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.183 20:57:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.183 20:57:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.183 20:57:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:05.183 20:57:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:05.441 [2024-06-09 20:57:33.362687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:05.441 [2024-06-09 20:57:33.364482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:05.441 [2024-06-09 20:57:33.364557] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:05.441 [2024-06-09 20:57:33.364629] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:05.441 [2024-06-09 20:57:33.364663] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.441 [2024-06-09 20:57:33.364675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:05.441 request: 00:15:05.441 { 00:15:05.441 "name": "raid_bdev1", 00:15:05.441 "raid_level": "raid1", 00:15:05.441 "base_bdevs": [ 00:15:05.441 "malloc1", 00:15:05.441 "malloc2" 00:15:05.441 ], 00:15:05.441 "superblock": false, 00:15:05.441 "method": "bdev_raid_create", 00:15:05.441 "req_id": 1 00:15:05.441 } 00:15:05.441 Got JSON-RPC error response 00:15:05.441 response: 00:15:05.441 { 00:15:05.441 "code": -17, 00:15:05.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:05.441 } 00:15:05.441 20:57:33 -- common/autotest_common.sh@643 -- # es=1 00:15:05.441 
20:57:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:05.441 20:57:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:05.441 20:57:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:05.441 20:57:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:05.441 20:57:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.441 20:57:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:05.441 20:57:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:05.441 20:57:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.700 [2024-06-09 20:57:33.746687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.700 [2024-06-09 20:57:33.746795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.700 [2024-06-09 20:57:33.746840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:05.700 [2024-06-09 20:57:33.746874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.700 [2024-06-09 20:57:33.749233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.700 [2024-06-09 20:57:33.749304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.700 [2024-06-09 20:57:33.749399] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:05.700 [2024-06-09 20:57:33.749459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.700 pt1 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.700 20:57:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.958 20:57:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.958 "name": "raid_bdev1", 00:15:05.958 "uuid": "0fdd74f1-9d74-4fda-893e-004225a32efc", 00:15:05.958 "strip_size_kb": 0, 00:15:05.958 "state": "configuring", 00:15:05.958 "raid_level": "raid1", 00:15:05.958 "superblock": true, 00:15:05.958 "num_base_bdevs": 2, 00:15:05.958 "num_base_bdevs_discovered": 1, 00:15:05.958 "num_base_bdevs_operational": 2, 00:15:05.958 "base_bdevs_list": [ 00:15:05.958 { 00:15:05.958 "name": "pt1", 00:15:05.958 "uuid": "75f0abd6-e37a-50e3-a941-49259247db00", 00:15:05.958 "is_configured": true, 00:15:05.958 "data_offset": 2048, 00:15:05.958 "data_size": 63488 00:15:05.958 }, 00:15:05.958 { 00:15:05.958 "name": null, 00:15:05.958 "uuid": 
"bbfc5c49-0298-5a44-ad7a-9f1557908857", 00:15:05.958 "is_configured": false, 00:15:05.958 "data_offset": 2048, 00:15:05.958 "data_size": 63488 00:15:05.958 } 00:15:05.958 ] 00:15:05.958 }' 00:15:05.958 20:57:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.958 20:57:33 -- common/autotest_common.sh@10 -- # set +x 00:15:06.525 20:57:34 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:06.525 20:57:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:06.525 20:57:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:06.525 20:57:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.783 [2024-06-09 20:57:34.734986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.783 [2024-06-09 20:57:34.735086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.783 [2024-06-09 20:57:34.735126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:06.783 [2024-06-09 20:57:34.735154] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.783 [2024-06-09 20:57:34.735625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.783 [2024-06-09 20:57:34.735662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.783 [2024-06-09 20:57:34.735760] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:06.783 [2024-06-09 20:57:34.735783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.783 [2024-06-09 20:57:34.735918] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:06.783 [2024-06-09 20:57:34.735931] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:06.783 [2024-06-09 20:57:34.736038] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:06.783 [2024-06-09 20:57:34.736341] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:06.783 [2024-06-09 20:57:34.736355] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:06.784 [2024-06-09 20:57:34.736479] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.784 pt2 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:06.784 20:57:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.042 20:57:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.042 "name": "raid_bdev1", 00:15:07.042 "uuid": "0fdd74f1-9d74-4fda-893e-004225a32efc", 00:15:07.042 "strip_size_kb": 0, 00:15:07.042 "state": "online", 00:15:07.042 "raid_level": "raid1", 00:15:07.042 "superblock": true, 00:15:07.042 "num_base_bdevs": 2, 00:15:07.042 "num_base_bdevs_discovered": 2, 00:15:07.042 "num_base_bdevs_operational": 2, 00:15:07.042 "base_bdevs_list": [ 00:15:07.042 { 00:15:07.042 "name": "pt1", 00:15:07.042 "uuid": "75f0abd6-e37a-50e3-a941-49259247db00", 00:15:07.042 "is_configured": true, 00:15:07.042 "data_offset": 2048, 00:15:07.042 "data_size": 63488 00:15:07.042 }, 00:15:07.042 { 00:15:07.042 "name": "pt2", 00:15:07.042 "uuid": "bbfc5c49-0298-5a44-ad7a-9f1557908857", 00:15:07.042 "is_configured": true, 00:15:07.042 "data_offset": 2048, 00:15:07.042 "data_size": 63488 00:15:07.042 } 00:15:07.042 ] 00:15:07.042 }' 00:15:07.042 20:57:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.042 20:57:35 -- common/autotest_common.sh@10 -- # set +x 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:07.610 [2024-06-09 20:57:35.759407] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@430 -- # '[' 0fdd74f1-9d74-4fda-893e-004225a32efc '!=' 0fdd74f1-9d74-4fda-893e-004225a32efc ']' 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:07.610 20:57:35 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:07.869 [2024-06-09 20:57:35.955277] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.869 20:57:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.127 20:57:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.127 "name": "raid_bdev1", 00:15:08.127 "uuid": "0fdd74f1-9d74-4fda-893e-004225a32efc", 00:15:08.128 "strip_size_kb": 0, 00:15:08.128 "state": "online", 00:15:08.128 "raid_level": "raid1", 00:15:08.128 "superblock": true, 00:15:08.128 "num_base_bdevs": 2, 00:15:08.128 "num_base_bdevs_discovered": 1, 00:15:08.128 
"num_base_bdevs_operational": 1, 00:15:08.128 "base_bdevs_list": [ 00:15:08.128 { 00:15:08.128 "name": null, 00:15:08.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.128 "is_configured": false, 00:15:08.128 "data_offset": 2048, 00:15:08.128 "data_size": 63488 00:15:08.128 }, 00:15:08.128 { 00:15:08.128 "name": "pt2", 00:15:08.128 "uuid": "bbfc5c49-0298-5a44-ad7a-9f1557908857", 00:15:08.128 "is_configured": true, 00:15:08.128 "data_offset": 2048, 00:15:08.128 "data_size": 63488 00:15:08.128 } 00:15:08.128 ] 00:15:08.128 }' 00:15:08.128 20:57:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.128 20:57:36 -- common/autotest_common.sh@10 -- # set +x 00:15:08.695 20:57:36 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:08.953 [2024-06-09 20:57:37.015465] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.953 [2024-06-09 20:57:37.015497] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.953 [2024-06-09 20:57:37.015569] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.953 [2024-06-09 20:57:37.015624] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.953 [2024-06-09 20:57:37.015635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:08.953 20:57:37 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.953 20:57:37 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:09.212 20:57:37 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:09.212 20:57:37 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:09.212 20:57:37 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:09.212 20:57:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:09.212 20:57:37 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:09.470 20:57:37 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.728 [2024-06-09 20:57:37.667590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.728 [2024-06-09 20:57:37.667682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.728 [2024-06-09 20:57:37.667716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:09.728 [2024-06-09 20:57:37.667751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.728 [2024-06-09 20:57:37.670158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.728 [2024-06-09 20:57:37.670212] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.728 [2024-06-09 20:57:37.670314] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:09.728 [2024-06-09 
20:57:37.670371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.728 [2024-06-09 20:57:37.670535] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:15:09.728 [2024-06-09 20:57:37.670547] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.728 [2024-06-09 20:57:37.670672] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:09.728 [2024-06-09 20:57:37.671062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:15:09.729 [2024-06-09 20:57:37.671085] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:15:09.729 [2024-06-09 20:57:37.671246] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.729 pt2 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.729 20:57:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.987 20:57:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.987 "name": "raid_bdev1", 00:15:09.987 "uuid": "0fdd74f1-9d74-4fda-893e-004225a32efc", 00:15:09.987 "strip_size_kb": 0, 00:15:09.987 "state": "online", 00:15:09.987 "raid_level": "raid1", 00:15:09.987 "superblock": true, 00:15:09.987 "num_base_bdevs": 2, 00:15:09.987 "num_base_bdevs_discovered": 1, 00:15:09.987 "num_base_bdevs_operational": 1, 00:15:09.987 "base_bdevs_list": [ 00:15:09.987 { 00:15:09.987 "name": null, 00:15:09.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.987 "is_configured": false, 00:15:09.987 "data_offset": 2048, 00:15:09.987 "data_size": 63488 00:15:09.987 }, 00:15:09.987 { 00:15:09.987 "name": "pt2", 00:15:09.987 "uuid": "bbfc5c49-0298-5a44-ad7a-9f1557908857", 00:15:09.987 "is_configured": true, 00:15:09.987 "data_offset": 2048, 00:15:09.987 "data_size": 63488 00:15:09.987 } 00:15:09.987 ] 00:15:09.987 }' 00:15:09.987 20:57:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.987 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:15:10.555 20:57:38 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:10.555 20:57:38 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.555 20:57:38 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:10.555 [2024-06-09 20:57:38.707579] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.555 20:57:38 -- bdev/bdev_raid.sh@506 -- # '[' 0fdd74f1-9d74-4fda-893e-004225a32efc '!=' 0fdd74f1-9d74-4fda-893e-004225a32efc ']' 00:15:10.555 
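At this point the raid bdev has been rebuilt purely from the superblock found on the re-created pt2 (state online, 1 of 2 base bdevs discovered, pt1 reported as an unconfigured null slot), and the final '[' uuid '!=' uuid ']' comparison confirms the re-assembled array reports its original UUID. A minimal sketch of the kind of check verify_raid_bdev_state performs, assuming the same socket and the field names visible in the JSON above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pull the raid entry and assert it came back online with one base bdev.
info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
       | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[[ $state == online && $discovered -eq 1 ]] \
    || { echo "unexpected raid state: $state/$discovered" >&2; exit 1; }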
20:57:38 -- bdev/bdev_raid.sh@511 -- # killprocess 113931 00:15:10.555 20:57:38 -- common/autotest_common.sh@926 -- # '[' -z 113931 ']' 00:15:10.555 20:57:38 -- common/autotest_common.sh@930 -- # kill -0 113931 00:15:10.555 20:57:38 -- common/autotest_common.sh@931 -- # uname 00:15:10.555 20:57:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:10.555 20:57:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113931 00:15:10.814 20:57:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:10.814 20:57:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:10.814 20:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113931' 00:15:10.814 killing process with pid 113931 00:15:10.814 20:57:38 -- common/autotest_common.sh@945 -- # kill 113931 00:15:10.814 [2024-06-09 20:57:38.749041] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.814 20:57:38 -- common/autotest_common.sh@950 -- # wait 113931 00:15:10.814 [2024-06-09 20:57:38.749124] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.814 [2024-06-09 20:57:38.749182] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.814 [2024-06-09 20:57:38.749194] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:15:10.814 [2024-06-09 20:57:38.886303] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.750 ************************************ 00:15:11.750 END TEST raid_superblock_test 00:15:11.750 ************************************ 00:15:11.750 20:57:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:11.750 00:15:11.750 real 0m11.173s 00:15:11.750 user 0m19.710s 00:15:11.750 sys 0m1.392s 00:15:11.750 20:57:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.750 20:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:12.009 20:57:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:12.009 20:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.009 20:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.009 ************************************ 00:15:12.009 START TEST raid_state_function_test 00:15:12.009 ************************************ 00:15:12.009 20:57:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=114282 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114282' 00:15:12.009 Process raid pid: 114282 00:15:12.009 20:57:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114282 /var/tmp/spdk-raid.sock 00:15:12.009 20:57:39 -- common/autotest_common.sh@819 -- # '[' -z 114282 ']' 00:15:12.009 20:57:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.009 20:57:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.010 20:57:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.010 20:57:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.010 20:57:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.010 [2024-06-09 20:57:40.039017] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
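Here the test driver launches the stub RPC application that will host the raid bdevs and then blocks until its UNIX-domain socket answers. A reduced sketch of that startup sequence, assuming the in-tree bdev_svc binary and using the generic rpc_get_methods call as a readiness probe in place of the waitforlisten helper:

# Start bdev_svc on the dedicated raid RPC socket with raid debug logging.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the RPC server responds (hypothetical stand-in for waitforlisten).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done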
00:15:12.010 [2024-06-09 20:57:40.039218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.268 [2024-06-09 20:57:40.206027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.268 [2024-06-09 20:57:40.399048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.538 [2024-06-09 20:57:40.590151] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.121 20:57:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.121 20:57:40 -- common/autotest_common.sh@852 -- # return 0 00:15:13.121 20:57:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:13.121 [2024-06-09 20:57:41.218236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.121 [2024-06-09 20:57:41.218330] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.121 [2024-06-09 20:57:41.218343] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.121 [2024-06-09 20:57:41.218363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.121 [2024-06-09 20:57:41.218370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.121 [2024-06-09 20:57:41.218412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.121 20:57:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.380 20:57:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.380 "name": "Existed_Raid", 00:15:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.380 "strip_size_kb": 64, 00:15:13.380 "state": "configuring", 00:15:13.380 "raid_level": "raid0", 00:15:13.380 "superblock": false, 00:15:13.380 "num_base_bdevs": 3, 00:15:13.380 "num_base_bdevs_discovered": 0, 00:15:13.380 "num_base_bdevs_operational": 3, 00:15:13.380 "base_bdevs_list": [ 00:15:13.380 { 00:15:13.380 "name": "BaseBdev1", 00:15:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.380 "is_configured": false, 00:15:13.380 "data_offset": 0, 00:15:13.380 "data_size": 0 00:15:13.380 }, 00:15:13.380 { 00:15:13.380 "name": "BaseBdev2", 00:15:13.380 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:13.380 "is_configured": false, 00:15:13.380 "data_offset": 0, 00:15:13.380 "data_size": 0 00:15:13.380 }, 00:15:13.380 { 00:15:13.380 "name": "BaseBdev3", 00:15:13.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.380 "is_configured": false, 00:15:13.380 "data_offset": 0, 00:15:13.380 "data_size": 0 00:15:13.380 } 00:15:13.380 ] 00:15:13.380 }' 00:15:13.380 20:57:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.380 20:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 20:57:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.204 [2024-06-09 20:57:42.312065] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.204 [2024-06-09 20:57:42.312128] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:14.204 20:57:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:14.462 [2024-06-09 20:57:42.508082] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.462 [2024-06-09 20:57:42.508151] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.462 [2024-06-09 20:57:42.508179] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.462 [2024-06-09 20:57:42.508205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.462 [2024-06-09 20:57:42.508213] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.462 [2024-06-09 20:57:42.508238] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.462 20:57:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.719 [2024-06-09 20:57:42.762310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.719 BaseBdev1 00:15:14.719 20:57:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:14.719 20:57:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:14.719 20:57:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:14.719 20:57:42 -- common/autotest_common.sh@889 -- # local i 00:15:14.719 20:57:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:14.719 20:57:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:14.719 20:57:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.977 20:57:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.977 [ 00:15:14.977 { 00:15:14.977 "name": "BaseBdev1", 00:15:14.977 "aliases": [ 00:15:14.977 "c662e46d-5ded-482e-a5f3-569c07d6adfe" 00:15:14.977 ], 00:15:14.977 "product_name": "Malloc disk", 00:15:14.977 "block_size": 512, 00:15:14.977 "num_blocks": 65536, 00:15:14.977 "uuid": "c662e46d-5ded-482e-a5f3-569c07d6adfe", 00:15:14.977 "assigned_rate_limits": { 00:15:14.977 "rw_ios_per_sec": 0, 00:15:14.977 "rw_mbytes_per_sec": 0, 00:15:14.977 "r_mbytes_per_sec": 0, 00:15:14.977 "w_mbytes_per_sec": 0 
00:15:14.977 }, 00:15:14.977 "claimed": true, 00:15:14.977 "claim_type": "exclusive_write", 00:15:14.977 "zoned": false, 00:15:14.977 "supported_io_types": { 00:15:14.977 "read": true, 00:15:14.977 "write": true, 00:15:14.977 "unmap": true, 00:15:14.977 "write_zeroes": true, 00:15:14.977 "flush": true, 00:15:14.977 "reset": true, 00:15:14.977 "compare": false, 00:15:14.977 "compare_and_write": false, 00:15:14.977 "abort": true, 00:15:14.977 "nvme_admin": false, 00:15:14.977 "nvme_io": false 00:15:14.977 }, 00:15:14.977 "memory_domains": [ 00:15:14.977 { 00:15:14.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.977 "dma_device_type": 2 00:15:14.977 } 00:15:14.977 ], 00:15:14.977 "driver_specific": {} 00:15:14.977 } 00:15:14.977 ] 00:15:15.236 20:57:43 -- common/autotest_common.sh@895 -- # return 0 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.236 20:57:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.494 20:57:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.494 "name": "Existed_Raid", 00:15:15.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.494 "strip_size_kb": 64, 00:15:15.494 "state": "configuring", 00:15:15.494 "raid_level": "raid0", 00:15:15.494 "superblock": false, 00:15:15.494 "num_base_bdevs": 3, 00:15:15.494 "num_base_bdevs_discovered": 1, 00:15:15.494 "num_base_bdevs_operational": 3, 00:15:15.494 "base_bdevs_list": [ 00:15:15.494 { 00:15:15.494 "name": "BaseBdev1", 00:15:15.494 "uuid": "c662e46d-5ded-482e-a5f3-569c07d6adfe", 00:15:15.494 "is_configured": true, 00:15:15.494 "data_offset": 0, 00:15:15.494 "data_size": 65536 00:15:15.494 }, 00:15:15.494 { 00:15:15.494 "name": "BaseBdev2", 00:15:15.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.494 "is_configured": false, 00:15:15.494 "data_offset": 0, 00:15:15.494 "data_size": 0 00:15:15.494 }, 00:15:15.494 { 00:15:15.494 "name": "BaseBdev3", 00:15:15.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.494 "is_configured": false, 00:15:15.494 "data_offset": 0, 00:15:15.494 "data_size": 0 00:15:15.494 } 00:15:15.494 ] 00:15:15.494 }' 00:15:15.494 20:57:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.494 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:16.061 20:57:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:16.320 [2024-06-09 20:57:44.270728] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.320 [2024-06-09 20:57:44.270809] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:15:16.320 20:57:44 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:16.320 20:57:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:16.579 [2024-06-09 20:57:44.526797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.579 [2024-06-09 20:57:44.528846] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:16.579 [2024-06-09 20:57:44.528907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:16.579 [2024-06-09 20:57:44.528934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:16.579 [2024-06-09 20:57:44.528959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.579 "name": "Existed_Raid", 00:15:16.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.579 "strip_size_kb": 64, 00:15:16.579 "state": "configuring", 00:15:16.579 "raid_level": "raid0", 00:15:16.579 "superblock": false, 00:15:16.579 "num_base_bdevs": 3, 00:15:16.579 "num_base_bdevs_discovered": 1, 00:15:16.579 "num_base_bdevs_operational": 3, 00:15:16.579 "base_bdevs_list": [ 00:15:16.579 { 00:15:16.579 "name": "BaseBdev1", 00:15:16.579 "uuid": "c662e46d-5ded-482e-a5f3-569c07d6adfe", 00:15:16.579 "is_configured": true, 00:15:16.579 "data_offset": 0, 00:15:16.579 "data_size": 65536 00:15:16.579 }, 00:15:16.579 { 00:15:16.579 "name": "BaseBdev2", 00:15:16.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.579 "is_configured": false, 00:15:16.579 "data_offset": 0, 00:15:16.579 "data_size": 0 00:15:16.579 }, 00:15:16.579 { 00:15:16.579 "name": "BaseBdev3", 00:15:16.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.579 "is_configured": false, 00:15:16.579 "data_offset": 0, 00:15:16.579 "data_size": 0 00:15:16.579 } 00:15:16.579 ] 00:15:16.579 }' 00:15:16.579 20:57:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.579 20:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:17.514 20:57:45 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:17.514 [2024-06-09 20:57:45.582436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.514 BaseBdev2 00:15:17.514 20:57:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:17.514 20:57:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:17.514 20:57:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:17.514 20:57:45 -- common/autotest_common.sh@889 -- # local i 00:15:17.514 20:57:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:17.514 20:57:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:17.514 20:57:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.773 20:57:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.031 [ 00:15:18.031 { 00:15:18.031 "name": "BaseBdev2", 00:15:18.031 "aliases": [ 00:15:18.031 "be7371a3-2c16-4bcf-9af4-b722e0df37a7" 00:15:18.031 ], 00:15:18.031 "product_name": "Malloc disk", 00:15:18.031 "block_size": 512, 00:15:18.031 "num_blocks": 65536, 00:15:18.031 "uuid": "be7371a3-2c16-4bcf-9af4-b722e0df37a7", 00:15:18.031 "assigned_rate_limits": { 00:15:18.031 "rw_ios_per_sec": 0, 00:15:18.031 "rw_mbytes_per_sec": 0, 00:15:18.032 "r_mbytes_per_sec": 0, 00:15:18.032 "w_mbytes_per_sec": 0 00:15:18.032 }, 00:15:18.032 "claimed": true, 00:15:18.032 "claim_type": "exclusive_write", 00:15:18.032 "zoned": false, 00:15:18.032 "supported_io_types": { 00:15:18.032 "read": true, 00:15:18.032 "write": true, 00:15:18.032 "unmap": true, 00:15:18.032 "write_zeroes": true, 00:15:18.032 "flush": true, 00:15:18.032 "reset": true, 00:15:18.032 "compare": false, 00:15:18.032 "compare_and_write": false, 00:15:18.032 "abort": true, 00:15:18.032 "nvme_admin": false, 00:15:18.032 "nvme_io": false 00:15:18.032 }, 00:15:18.032 "memory_domains": [ 00:15:18.032 { 00:15:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.032 "dma_device_type": 2 00:15:18.032 } 00:15:18.032 ], 00:15:18.032 "driver_specific": {} 00:15:18.032 } 00:15:18.032 ] 00:15:18.032 20:57:46 -- common/autotest_common.sh@895 -- # return 0 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.032 20:57:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
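Each base bdev joins the pending array the same way: create a malloc bdev under the name the raid config expects, let examine run so the configuring raid claims it (claim_type exclusive_write, as in the JSON above), then re-query the raid state. A compact sketch of that step, assuming the same rpc.py and socket path:

# Small wrapper so every call targets the raid test socket.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# 32 MiB malloc bdev with 512-byte blocks, named to match the raid config.
rpc bdev_malloc_create 32 512 -b BaseBdev2
# Let the examine callbacks finish so the raid module can claim the bdev.
rpc bdev_wait_for_examine
# Confirm the bdev exists (waits up to 2000 ms), then inspect the raid.
rpc bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'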
00:15:18.291 20:57:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.291 "name": "Existed_Raid", 00:15:18.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.291 "strip_size_kb": 64, 00:15:18.291 "state": "configuring", 00:15:18.291 "raid_level": "raid0", 00:15:18.291 "superblock": false, 00:15:18.291 "num_base_bdevs": 3, 00:15:18.291 "num_base_bdevs_discovered": 2, 00:15:18.291 "num_base_bdevs_operational": 3, 00:15:18.291 "base_bdevs_list": [ 00:15:18.291 { 00:15:18.291 "name": "BaseBdev1", 00:15:18.291 "uuid": "c662e46d-5ded-482e-a5f3-569c07d6adfe", 00:15:18.291 "is_configured": true, 00:15:18.291 "data_offset": 0, 00:15:18.291 "data_size": 65536 00:15:18.291 }, 00:15:18.291 { 00:15:18.291 "name": "BaseBdev2", 00:15:18.291 "uuid": "be7371a3-2c16-4bcf-9af4-b722e0df37a7", 00:15:18.291 "is_configured": true, 00:15:18.291 "data_offset": 0, 00:15:18.291 "data_size": 65536 00:15:18.291 }, 00:15:18.291 { 00:15:18.291 "name": "BaseBdev3", 00:15:18.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.291 "is_configured": false, 00:15:18.291 "data_offset": 0, 00:15:18.291 "data_size": 0 00:15:18.291 } 00:15:18.291 ] 00:15:18.291 }' 00:15:18.291 20:57:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.291 20:57:46 -- common/autotest_common.sh@10 -- # set +x 00:15:18.859 20:57:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:19.117 [2024-06-09 20:57:47.189667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.117 [2024-06-09 20:57:47.189711] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:19.117 [2024-06-09 20:57:47.189719] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:19.117 [2024-06-09 20:57:47.189821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:19.117 [2024-06-09 20:57:47.190186] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:19.117 [2024-06-09 20:57:47.190208] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:19.117 [2024-06-09 20:57:47.190454] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.117 BaseBdev3 00:15:19.117 20:57:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:19.117 20:57:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:19.117 20:57:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:19.117 20:57:47 -- common/autotest_common.sh@889 -- # local i 00:15:19.117 20:57:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:19.117 20:57:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:19.117 20:57:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.377 20:57:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:19.637 [ 00:15:19.637 { 00:15:19.637 "name": "BaseBdev3", 00:15:19.637 "aliases": [ 00:15:19.637 "a490e416-de95-4371-bbdd-b847d42bab2d" 00:15:19.637 ], 00:15:19.637 "product_name": "Malloc disk", 00:15:19.637 "block_size": 512, 00:15:19.637 "num_blocks": 65536, 00:15:19.637 "uuid": "a490e416-de95-4371-bbdd-b847d42bab2d", 00:15:19.637 "assigned_rate_limits": { 00:15:19.637 
"rw_ios_per_sec": 0, 00:15:19.637 "rw_mbytes_per_sec": 0, 00:15:19.637 "r_mbytes_per_sec": 0, 00:15:19.637 "w_mbytes_per_sec": 0 00:15:19.637 }, 00:15:19.637 "claimed": true, 00:15:19.637 "claim_type": "exclusive_write", 00:15:19.637 "zoned": false, 00:15:19.637 "supported_io_types": { 00:15:19.637 "read": true, 00:15:19.637 "write": true, 00:15:19.637 "unmap": true, 00:15:19.637 "write_zeroes": true, 00:15:19.637 "flush": true, 00:15:19.637 "reset": true, 00:15:19.637 "compare": false, 00:15:19.637 "compare_and_write": false, 00:15:19.637 "abort": true, 00:15:19.637 "nvme_admin": false, 00:15:19.637 "nvme_io": false 00:15:19.637 }, 00:15:19.637 "memory_domains": [ 00:15:19.637 { 00:15:19.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.637 "dma_device_type": 2 00:15:19.637 } 00:15:19.637 ], 00:15:19.637 "driver_specific": {} 00:15:19.637 } 00:15:19.637 ] 00:15:19.637 20:57:47 -- common/autotest_common.sh@895 -- # return 0 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.637 20:57:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.895 20:57:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.895 "name": "Existed_Raid", 00:15:19.895 "uuid": "255896a8-0564-4c71-a614-6e8e1b23c001", 00:15:19.895 "strip_size_kb": 64, 00:15:19.895 "state": "online", 00:15:19.895 "raid_level": "raid0", 00:15:19.895 "superblock": false, 00:15:19.895 "num_base_bdevs": 3, 00:15:19.895 "num_base_bdevs_discovered": 3, 00:15:19.895 "num_base_bdevs_operational": 3, 00:15:19.895 "base_bdevs_list": [ 00:15:19.895 { 00:15:19.895 "name": "BaseBdev1", 00:15:19.895 "uuid": "c662e46d-5ded-482e-a5f3-569c07d6adfe", 00:15:19.896 "is_configured": true, 00:15:19.896 "data_offset": 0, 00:15:19.896 "data_size": 65536 00:15:19.896 }, 00:15:19.896 { 00:15:19.896 "name": "BaseBdev2", 00:15:19.896 "uuid": "be7371a3-2c16-4bcf-9af4-b722e0df37a7", 00:15:19.896 "is_configured": true, 00:15:19.896 "data_offset": 0, 00:15:19.896 "data_size": 65536 00:15:19.896 }, 00:15:19.896 { 00:15:19.896 "name": "BaseBdev3", 00:15:19.896 "uuid": "a490e416-de95-4371-bbdd-b847d42bab2d", 00:15:19.896 "is_configured": true, 00:15:19.896 "data_offset": 0, 00:15:19.896 "data_size": 65536 00:15:19.896 } 00:15:19.896 ] 00:15:19.896 }' 00:15:19.896 20:57:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.896 20:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:20.463 20:57:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:20.721 [2024-06-09 20:57:48.798272] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.721 [2024-06-09 20:57:48.798315] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.721 [2024-06-09 20:57:48.798375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.721 20:57:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.980 20:57:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.980 "name": "Existed_Raid", 00:15:20.980 "uuid": "255896a8-0564-4c71-a614-6e8e1b23c001", 00:15:20.980 "strip_size_kb": 64, 00:15:20.980 "state": "offline", 00:15:20.980 "raid_level": "raid0", 00:15:20.980 "superblock": false, 00:15:20.980 "num_base_bdevs": 3, 00:15:20.980 "num_base_bdevs_discovered": 2, 00:15:20.980 "num_base_bdevs_operational": 2, 00:15:20.980 "base_bdevs_list": [ 00:15:20.980 { 00:15:20.980 "name": null, 00:15:20.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.980 "is_configured": false, 00:15:20.980 "data_offset": 0, 00:15:20.980 "data_size": 65536 00:15:20.980 }, 00:15:20.980 { 00:15:20.980 "name": "BaseBdev2", 00:15:20.980 "uuid": "be7371a3-2c16-4bcf-9af4-b722e0df37a7", 00:15:20.980 "is_configured": true, 00:15:20.980 "data_offset": 0, 00:15:20.980 "data_size": 65536 00:15:20.980 }, 00:15:20.980 { 00:15:20.980 "name": "BaseBdev3", 00:15:20.980 "uuid": "a490e416-de95-4371-bbdd-b847d42bab2d", 00:15:20.980 "is_configured": true, 00:15:20.980 "data_offset": 0, 00:15:20.980 "data_size": 65536 00:15:20.980 } 00:15:20.980 ] 00:15:20.980 }' 00:15:20.980 20:57:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.980 20:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:21.547 20:57:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:21.547 20:57:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:21.547 20:57:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.547 20:57:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:21.817 20:57:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:21.817 20:57:49 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:21.817 20:57:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:22.087 [2024-06-09 20:57:50.097191] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.087 20:57:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:22.087 20:57:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:22.087 20:57:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.087 20:57:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:22.345 20:57:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:22.345 20:57:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.345 20:57:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:22.602 [2024-06-09 20:57:50.622927] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:22.603 [2024-06-09 20:57:50.623001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:22.603 20:57:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:22.603 20:57:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:22.603 20:57:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.603 20:57:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:22.860 20:57:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:22.860 20:57:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:22.860 20:57:50 -- bdev/bdev_raid.sh@287 -- # killprocess 114282 00:15:22.860 20:57:50 -- common/autotest_common.sh@926 -- # '[' -z 114282 ']' 00:15:22.860 20:57:50 -- common/autotest_common.sh@930 -- # kill -0 114282 00:15:22.860 20:57:50 -- common/autotest_common.sh@931 -- # uname 00:15:22.860 20:57:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:22.860 20:57:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114282 00:15:22.860 20:57:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:22.860 20:57:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:22.860 20:57:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114282' 00:15:22.860 killing process with pid 114282 00:15:22.860 20:57:50 -- common/autotest_common.sh@945 -- # kill 114282 00:15:22.860 [2024-06-09 20:57:50.922215] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.860 20:57:50 -- common/autotest_common.sh@950 -- # wait 114282 00:15:22.860 [2024-06-09 20:57:50.922319] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:23.795 00:15:23.795 real 0m11.909s 00:15:23.795 user 0m20.912s 00:15:23.795 sys 0m1.580s 00:15:23.795 20:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.795 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:23.795 ************************************ 00:15:23.795 END TEST raid_state_function_test 00:15:23.795 ************************************ 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:23.795 20:57:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:23.795 20:57:51 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:23.795 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:23.795 ************************************ 00:15:23.795 START TEST raid_state_function_test_sb 00:15:23.795 ************************************ 00:15:23.795 20:57:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=114660 00:15:23.795 Process raid pid: 114660 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114660' 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114660 /var/tmp/spdk-raid.sock 00:15:23.795 20:57:51 -- common/autotest_common.sh@819 -- # '[' -z 114660 ']' 00:15:23.795 20:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.795 20:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:23.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:23.795 20:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:23.795 20:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:23.795 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:15:23.795 20:57:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:24.053 [2024-06-09 20:57:51.999225] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
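This run repeats the state-machine test with superblock=true, so every create goes through bdev_raid_create with the extra -s flag and each base bdev receives an on-disk raid superblock (the raid1 test earlier showed the effect: superblock true and a data_offset of 2048 blocks instead of 0). A sketch of the superblock-enabled create, using the same command the trace issues below:

# -z 64: 64 KiB strip size; -s: write a raid superblock to each base bdev.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid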
00:15:24.053 [2024-06-09 20:57:51.999669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.053 [2024-06-09 20:57:52.167492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.310 [2024-06-09 20:57:52.347842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.567 [2024-06-09 20:57:52.518841] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.824 20:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:24.824 20:57:52 -- common/autotest_common.sh@852 -- # return 0 00:15:24.824 20:57:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:25.082 [2024-06-09 20:57:53.175500] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.082 [2024-06-09 20:57:53.175592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.082 [2024-06-09 20:57:53.175622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.082 [2024-06-09 20:57:53.175641] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.082 [2024-06-09 20:57:53.175648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.082 [2024-06-09 20:57:53.175687] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.082 20:57:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.341 20:57:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.341 "name": "Existed_Raid", 00:15:25.341 "uuid": "8fb560de-6e8e-4b44-9c19-9eae3266440e", 00:15:25.341 "strip_size_kb": 64, 00:15:25.341 "state": "configuring", 00:15:25.341 "raid_level": "raid0", 00:15:25.341 "superblock": true, 00:15:25.341 "num_base_bdevs": 3, 00:15:25.341 "num_base_bdevs_discovered": 0, 00:15:25.341 "num_base_bdevs_operational": 3, 00:15:25.341 "base_bdevs_list": [ 00:15:25.341 { 00:15:25.341 "name": "BaseBdev1", 00:15:25.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.341 "is_configured": false, 00:15:25.341 "data_offset": 0, 00:15:25.341 "data_size": 0 00:15:25.341 }, 00:15:25.341 { 00:15:25.341 "name": "BaseBdev2", 00:15:25.341 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:25.341 "is_configured": false, 00:15:25.341 "data_offset": 0, 00:15:25.341 "data_size": 0 00:15:25.341 }, 00:15:25.341 { 00:15:25.341 "name": "BaseBdev3", 00:15:25.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.341 "is_configured": false, 00:15:25.341 "data_offset": 0, 00:15:25.341 "data_size": 0 00:15:25.341 } 00:15:25.341 ] 00:15:25.341 }' 00:15:25.341 20:57:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.341 20:57:53 -- common/autotest_common.sh@10 -- # set +x 00:15:25.907 20:57:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.165 [2024-06-09 20:57:54.147544] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.165 [2024-06-09 20:57:54.147595] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:26.165 20:57:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:26.423 [2024-06-09 20:57:54.391666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.423 [2024-06-09 20:57:54.391765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.423 [2024-06-09 20:57:54.391794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.423 [2024-06-09 20:57:54.391821] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.423 [2024-06-09 20:57:54.391830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.423 [2024-06-09 20:57:54.391855] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.423 20:57:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:26.681 [2024-06-09 20:57:54.618865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.681 BaseBdev1 00:15:26.681 20:57:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:26.681 20:57:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:26.681 20:57:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:26.681 20:57:54 -- common/autotest_common.sh@889 -- # local i 00:15:26.681 20:57:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:26.681 20:57:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:26.681 20:57:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.940 20:57:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.940 [ 00:15:26.940 { 00:15:26.940 "name": "BaseBdev1", 00:15:26.940 "aliases": [ 00:15:26.940 "e351da6b-68d7-47e1-8b9c-fabb6cee6cbb" 00:15:26.940 ], 00:15:26.940 "product_name": "Malloc disk", 00:15:26.940 "block_size": 512, 00:15:26.940 "num_blocks": 65536, 00:15:26.940 "uuid": "e351da6b-68d7-47e1-8b9c-fabb6cee6cbb", 00:15:26.940 "assigned_rate_limits": { 00:15:26.940 "rw_ios_per_sec": 0, 00:15:26.940 "rw_mbytes_per_sec": 0, 00:15:26.940 "r_mbytes_per_sec": 0, 00:15:26.940 
"w_mbytes_per_sec": 0 00:15:26.940 }, 00:15:26.940 "claimed": true, 00:15:26.940 "claim_type": "exclusive_write", 00:15:26.940 "zoned": false, 00:15:26.940 "supported_io_types": { 00:15:26.940 "read": true, 00:15:26.940 "write": true, 00:15:26.940 "unmap": true, 00:15:26.940 "write_zeroes": true, 00:15:26.940 "flush": true, 00:15:26.940 "reset": true, 00:15:26.940 "compare": false, 00:15:26.940 "compare_and_write": false, 00:15:26.940 "abort": true, 00:15:26.940 "nvme_admin": false, 00:15:26.940 "nvme_io": false 00:15:26.940 }, 00:15:26.940 "memory_domains": [ 00:15:26.940 { 00:15:26.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.940 "dma_device_type": 2 00:15:26.940 } 00:15:26.940 ], 00:15:26.940 "driver_specific": {} 00:15:26.940 } 00:15:26.940 ] 00:15:26.940 20:57:55 -- common/autotest_common.sh@895 -- # return 0 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.940 20:57:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.199 20:57:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.199 "name": "Existed_Raid", 00:15:27.199 "uuid": "f7d02c31-884f-46e5-ac15-9912844a6dcc", 00:15:27.199 "strip_size_kb": 64, 00:15:27.199 "state": "configuring", 00:15:27.199 "raid_level": "raid0", 00:15:27.199 "superblock": true, 00:15:27.199 "num_base_bdevs": 3, 00:15:27.199 "num_base_bdevs_discovered": 1, 00:15:27.199 "num_base_bdevs_operational": 3, 00:15:27.199 "base_bdevs_list": [ 00:15:27.199 { 00:15:27.199 "name": "BaseBdev1", 00:15:27.199 "uuid": "e351da6b-68d7-47e1-8b9c-fabb6cee6cbb", 00:15:27.199 "is_configured": true, 00:15:27.199 "data_offset": 2048, 00:15:27.199 "data_size": 63488 00:15:27.199 }, 00:15:27.199 { 00:15:27.199 "name": "BaseBdev2", 00:15:27.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.199 "is_configured": false, 00:15:27.199 "data_offset": 0, 00:15:27.199 "data_size": 0 00:15:27.199 }, 00:15:27.199 { 00:15:27.199 "name": "BaseBdev3", 00:15:27.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.199 "is_configured": false, 00:15:27.199 "data_offset": 0, 00:15:27.199 "data_size": 0 00:15:27.199 } 00:15:27.199 ] 00:15:27.199 }' 00:15:27.199 20:57:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.199 20:57:55 -- common/autotest_common.sh@10 -- # set +x 00:15:27.766 20:57:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.025 [2024-06-09 20:57:55.943188] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.025 [2024-06-09 20:57:55.943269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:28.025 20:57:55 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:28.025 20:57:55 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:28.284 20:57:56 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.284 BaseBdev1 00:15:28.542 20:57:56 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:28.542 20:57:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:28.542 20:57:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:28.542 20:57:56 -- common/autotest_common.sh@889 -- # local i 00:15:28.542 20:57:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:28.543 20:57:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:28.543 20:57:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.543 20:57:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.801 [ 00:15:28.801 { 00:15:28.801 "name": "BaseBdev1", 00:15:28.801 "aliases": [ 00:15:28.801 "164ad7db-7e4d-4575-ae99-c8974db356cd" 00:15:28.801 ], 00:15:28.801 "product_name": "Malloc disk", 00:15:28.801 "block_size": 512, 00:15:28.801 "num_blocks": 65536, 00:15:28.801 "uuid": "164ad7db-7e4d-4575-ae99-c8974db356cd", 00:15:28.801 "assigned_rate_limits": { 00:15:28.801 "rw_ios_per_sec": 0, 00:15:28.801 "rw_mbytes_per_sec": 0, 00:15:28.801 "r_mbytes_per_sec": 0, 00:15:28.801 "w_mbytes_per_sec": 0 00:15:28.801 }, 00:15:28.801 "claimed": false, 00:15:28.801 "zoned": false, 00:15:28.801 "supported_io_types": { 00:15:28.801 "read": true, 00:15:28.801 "write": true, 00:15:28.801 "unmap": true, 00:15:28.801 "write_zeroes": true, 00:15:28.801 "flush": true, 00:15:28.801 "reset": true, 00:15:28.801 "compare": false, 00:15:28.801 "compare_and_write": false, 00:15:28.801 "abort": true, 00:15:28.801 "nvme_admin": false, 00:15:28.801 "nvme_io": false 00:15:28.801 }, 00:15:28.801 "memory_domains": [ 00:15:28.801 { 00:15:28.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.801 "dma_device_type": 2 00:15:28.801 } 00:15:28.801 ], 00:15:28.801 "driver_specific": {} 00:15:28.801 } 00:15:28.801 ] 00:15:28.801 20:57:56 -- common/autotest_common.sh@895 -- # return 0 00:15:28.802 20:57:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.060 [2024-06-09 20:57:57.041987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.060 [2024-06-09 20:57:57.044111] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.060 [2024-06-09 20:57:57.044189] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.060 [2024-06-09 20:57:57.044218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.060 [2024-06-09 20:57:57.044256] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.060 
20:57:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.060 20:57:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.319 20:57:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.319 "name": "Existed_Raid", 00:15:29.319 "uuid": "f0b75826-5ac5-4698-a3b8-062374ef0aac", 00:15:29.319 "strip_size_kb": 64, 00:15:29.319 "state": "configuring", 00:15:29.319 "raid_level": "raid0", 00:15:29.319 "superblock": true, 00:15:29.319 "num_base_bdevs": 3, 00:15:29.319 "num_base_bdevs_discovered": 1, 00:15:29.319 "num_base_bdevs_operational": 3, 00:15:29.319 "base_bdevs_list": [ 00:15:29.319 { 00:15:29.319 "name": "BaseBdev1", 00:15:29.319 "uuid": "164ad7db-7e4d-4575-ae99-c8974db356cd", 00:15:29.319 "is_configured": true, 00:15:29.319 "data_offset": 2048, 00:15:29.319 "data_size": 63488 00:15:29.319 }, 00:15:29.319 { 00:15:29.319 "name": "BaseBdev2", 00:15:29.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.319 "is_configured": false, 00:15:29.319 "data_offset": 0, 00:15:29.319 "data_size": 0 00:15:29.319 }, 00:15:29.319 { 00:15:29.319 "name": "BaseBdev3", 00:15:29.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.320 "is_configured": false, 00:15:29.320 "data_offset": 0, 00:15:29.320 "data_size": 0 00:15:29.320 } 00:15:29.320 ] 00:15:29.320 }' 00:15:29.320 20:57:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.320 20:57:57 -- common/autotest_common.sh@10 -- # set +x 00:15:29.886 20:57:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.144 [2024-06-09 20:57:58.102370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.144 BaseBdev2 00:15:30.144 20:57:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:30.144 20:57:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:30.144 20:57:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.144 20:57:58 -- common/autotest_common.sh@889 -- # local i 00:15:30.144 20:57:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.144 20:57:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.144 20:57:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.144 20:57:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.403 [ 00:15:30.403 { 00:15:30.403 "name": "BaseBdev2", 00:15:30.403 "aliases": [ 00:15:30.403 
"07e3bdcc-d55d-45fa-be78-24aca0905cc3" 00:15:30.403 ], 00:15:30.403 "product_name": "Malloc disk", 00:15:30.403 "block_size": 512, 00:15:30.403 "num_blocks": 65536, 00:15:30.403 "uuid": "07e3bdcc-d55d-45fa-be78-24aca0905cc3", 00:15:30.403 "assigned_rate_limits": { 00:15:30.403 "rw_ios_per_sec": 0, 00:15:30.403 "rw_mbytes_per_sec": 0, 00:15:30.403 "r_mbytes_per_sec": 0, 00:15:30.403 "w_mbytes_per_sec": 0 00:15:30.403 }, 00:15:30.403 "claimed": true, 00:15:30.403 "claim_type": "exclusive_write", 00:15:30.403 "zoned": false, 00:15:30.403 "supported_io_types": { 00:15:30.403 "read": true, 00:15:30.403 "write": true, 00:15:30.403 "unmap": true, 00:15:30.403 "write_zeroes": true, 00:15:30.403 "flush": true, 00:15:30.403 "reset": true, 00:15:30.403 "compare": false, 00:15:30.403 "compare_and_write": false, 00:15:30.403 "abort": true, 00:15:30.403 "nvme_admin": false, 00:15:30.403 "nvme_io": false 00:15:30.403 }, 00:15:30.403 "memory_domains": [ 00:15:30.403 { 00:15:30.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.403 "dma_device_type": 2 00:15:30.403 } 00:15:30.403 ], 00:15:30.403 "driver_specific": {} 00:15:30.403 } 00:15:30.403 ] 00:15:30.403 20:57:58 -- common/autotest_common.sh@895 -- # return 0 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.403 20:57:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.663 20:57:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.663 "name": "Existed_Raid", 00:15:30.663 "uuid": "f0b75826-5ac5-4698-a3b8-062374ef0aac", 00:15:30.663 "strip_size_kb": 64, 00:15:30.663 "state": "configuring", 00:15:30.663 "raid_level": "raid0", 00:15:30.663 "superblock": true, 00:15:30.663 "num_base_bdevs": 3, 00:15:30.663 "num_base_bdevs_discovered": 2, 00:15:30.663 "num_base_bdevs_operational": 3, 00:15:30.663 "base_bdevs_list": [ 00:15:30.663 { 00:15:30.663 "name": "BaseBdev1", 00:15:30.663 "uuid": "164ad7db-7e4d-4575-ae99-c8974db356cd", 00:15:30.663 "is_configured": true, 00:15:30.663 "data_offset": 2048, 00:15:30.663 "data_size": 63488 00:15:30.663 }, 00:15:30.663 { 00:15:30.663 "name": "BaseBdev2", 00:15:30.663 "uuid": "07e3bdcc-d55d-45fa-be78-24aca0905cc3", 00:15:30.663 "is_configured": true, 00:15:30.663 "data_offset": 2048, 00:15:30.663 "data_size": 63488 00:15:30.663 }, 00:15:30.663 { 00:15:30.663 "name": "BaseBdev3", 00:15:30.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.663 "is_configured": false, 00:15:30.663 "data_offset": 0, 00:15:30.663 "data_size": 0 00:15:30.663 
} 00:15:30.663 ] 00:15:30.663 }' 00:15:30.663 20:57:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.663 20:57:58 -- common/autotest_common.sh@10 -- # set +x 00:15:31.231 20:57:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.514 [2024-06-09 20:57:59.615738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.514 [2024-06-09 20:57:59.616014] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:31.514 [2024-06-09 20:57:59.616030] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:31.514 [2024-06-09 20:57:59.616257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:31.514 [2024-06-09 20:57:59.616596] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:31.514 [2024-06-09 20:57:59.616628] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:31.514 [2024-06-09 20:57:59.616802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.514 BaseBdev3 00:15:31.514 20:57:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:31.514 20:57:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:31.514 20:57:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.514 20:57:59 -- common/autotest_common.sh@889 -- # local i 00:15:31.514 20:57:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.514 20:57:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.514 20:57:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.794 20:57:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:32.053 [ 00:15:32.053 { 00:15:32.053 "name": "BaseBdev3", 00:15:32.053 "aliases": [ 00:15:32.053 "c881e746-c9c0-4e00-94fe-de7630789e26" 00:15:32.053 ], 00:15:32.053 "product_name": "Malloc disk", 00:15:32.053 "block_size": 512, 00:15:32.053 "num_blocks": 65536, 00:15:32.053 "uuid": "c881e746-c9c0-4e00-94fe-de7630789e26", 00:15:32.053 "assigned_rate_limits": { 00:15:32.053 "rw_ios_per_sec": 0, 00:15:32.053 "rw_mbytes_per_sec": 0, 00:15:32.053 "r_mbytes_per_sec": 0, 00:15:32.053 "w_mbytes_per_sec": 0 00:15:32.053 }, 00:15:32.053 "claimed": true, 00:15:32.053 "claim_type": "exclusive_write", 00:15:32.053 "zoned": false, 00:15:32.053 "supported_io_types": { 00:15:32.053 "read": true, 00:15:32.053 "write": true, 00:15:32.053 "unmap": true, 00:15:32.053 "write_zeroes": true, 00:15:32.053 "flush": true, 00:15:32.053 "reset": true, 00:15:32.053 "compare": false, 00:15:32.053 "compare_and_write": false, 00:15:32.053 "abort": true, 00:15:32.053 "nvme_admin": false, 00:15:32.053 "nvme_io": false 00:15:32.053 }, 00:15:32.053 "memory_domains": [ 00:15:32.053 { 00:15:32.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.053 "dma_device_type": 2 00:15:32.053 } 00:15:32.053 ], 00:15:32.053 "driver_specific": {} 00:15:32.053 } 00:15:32.053 ] 00:15:32.053 20:58:00 -- common/autotest_common.sh@895 -- # return 0 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.053 20:58:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.312 20:58:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.312 "name": "Existed_Raid", 00:15:32.312 "uuid": "f0b75826-5ac5-4698-a3b8-062374ef0aac", 00:15:32.312 "strip_size_kb": 64, 00:15:32.312 "state": "online", 00:15:32.312 "raid_level": "raid0", 00:15:32.312 "superblock": true, 00:15:32.312 "num_base_bdevs": 3, 00:15:32.312 "num_base_bdevs_discovered": 3, 00:15:32.312 "num_base_bdevs_operational": 3, 00:15:32.312 "base_bdevs_list": [ 00:15:32.312 { 00:15:32.312 "name": "BaseBdev1", 00:15:32.312 "uuid": "164ad7db-7e4d-4575-ae99-c8974db356cd", 00:15:32.312 "is_configured": true, 00:15:32.312 "data_offset": 2048, 00:15:32.312 "data_size": 63488 00:15:32.312 }, 00:15:32.312 { 00:15:32.312 "name": "BaseBdev2", 00:15:32.312 "uuid": "07e3bdcc-d55d-45fa-be78-24aca0905cc3", 00:15:32.312 "is_configured": true, 00:15:32.312 "data_offset": 2048, 00:15:32.312 "data_size": 63488 00:15:32.312 }, 00:15:32.312 { 00:15:32.312 "name": "BaseBdev3", 00:15:32.312 "uuid": "c881e746-c9c0-4e00-94fe-de7630789e26", 00:15:32.312 "is_configured": true, 00:15:32.312 "data_offset": 2048, 00:15:32.312 "data_size": 63488 00:15:32.312 } 00:15:32.312 ] 00:15:32.312 }' 00:15:32.312 20:58:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.312 20:58:00 -- common/autotest_common.sh@10 -- # set +x 00:15:32.879 20:58:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:33.137 [2024-06-09 20:58:01.096204] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.137 [2024-06-09 20:58:01.096238] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.138 [2024-06-09 20:58:01.096317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.138 20:58:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.394 20:58:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.394 "name": "Existed_Raid", 00:15:33.394 "uuid": "f0b75826-5ac5-4698-a3b8-062374ef0aac", 00:15:33.394 "strip_size_kb": 64, 00:15:33.394 "state": "offline", 00:15:33.394 "raid_level": "raid0", 00:15:33.394 "superblock": true, 00:15:33.394 "num_base_bdevs": 3, 00:15:33.394 "num_base_bdevs_discovered": 2, 00:15:33.394 "num_base_bdevs_operational": 2, 00:15:33.394 "base_bdevs_list": [ 00:15:33.394 { 00:15:33.394 "name": null, 00:15:33.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.394 "is_configured": false, 00:15:33.394 "data_offset": 2048, 00:15:33.394 "data_size": 63488 00:15:33.394 }, 00:15:33.394 { 00:15:33.394 "name": "BaseBdev2", 00:15:33.394 "uuid": "07e3bdcc-d55d-45fa-be78-24aca0905cc3", 00:15:33.394 "is_configured": true, 00:15:33.394 "data_offset": 2048, 00:15:33.394 "data_size": 63488 00:15:33.394 }, 00:15:33.394 { 00:15:33.394 "name": "BaseBdev3", 00:15:33.394 "uuid": "c881e746-c9c0-4e00-94fe-de7630789e26", 00:15:33.394 "is_configured": true, 00:15:33.394 "data_offset": 2048, 00:15:33.394 "data_size": 63488 00:15:33.394 } 00:15:33.394 ] 00:15:33.394 }' 00:15:33.394 20:58:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.394 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:15:33.959 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:33.959 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.959 20:58:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.959 20:58:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:34.217 20:58:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:34.217 20:58:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.217 20:58:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:34.217 [2024-06-09 20:58:02.389625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.476 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.476 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.476 20:58:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:34.476 20:58:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.734 20:58:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:34.734 20:58:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.735 20:58:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:34.735 [2024-06-09 20:58:02.883800] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.735 [2024-06-09 
20:58:02.883881] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:34.993 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.993 20:58:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.993 20:58:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.993 20:58:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.993 20:58:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:34.993 20:58:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:34.993 20:58:03 -- bdev/bdev_raid.sh@287 -- # killprocess 114660 00:15:34.993 20:58:03 -- common/autotest_common.sh@926 -- # '[' -z 114660 ']' 00:15:34.994 20:58:03 -- common/autotest_common.sh@930 -- # kill -0 114660 00:15:34.994 20:58:03 -- common/autotest_common.sh@931 -- # uname 00:15:34.994 20:58:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:34.994 20:58:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114660 00:15:35.252 20:58:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:35.252 20:58:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:35.252 20:58:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114660' 00:15:35.252 killing process with pid 114660 00:15:35.252 20:58:03 -- common/autotest_common.sh@945 -- # kill 114660 00:15:35.252 [2024-06-09 20:58:03.177380] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.252 [2024-06-09 20:58:03.177489] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.252 20:58:03 -- common/autotest_common.sh@950 -- # wait 114660 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:36.190 00:15:36.190 real 0m12.211s 00:15:36.190 user 0m21.537s 00:15:36.190 sys 0m1.424s 00:15:36.190 20:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.190 20:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:36.190 ************************************ 00:15:36.190 END TEST raid_state_function_test_sb 00:15:36.190 ************************************ 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:36.190 20:58:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:36.190 20:58:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:36.190 20:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:36.190 ************************************ 00:15:36.190 START TEST raid_superblock_test 00:15:36.190 ************************************ 00:15:36.190 20:58:04 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:36.190 20:58:04 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@357 -- # raid_pid=115039 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:36.190 20:58:04 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115039 /var/tmp/spdk-raid.sock 00:15:36.190 20:58:04 -- common/autotest_common.sh@819 -- # '[' -z 115039 ']' 00:15:36.190 20:58:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.190 20:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:36.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.190 20:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:36.190 20:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:36.190 20:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:36.190 [2024-06-09 20:58:04.268276] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:36.190 [2024-06-09 20:58:04.268486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115039 ] 00:15:36.449 [2024-06-09 20:58:04.437246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.449 [2024-06-09 20:58:04.596399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.709 [2024-06-09 20:58:04.763434] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.277 20:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.277 20:58:05 -- common/autotest_common.sh@852 -- # return 0 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:37.277 malloc1 00:15:37.277 20:58:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.536 [2024-06-09 20:58:05.584346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.536 [2024-06-09 20:58:05.584478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.536 
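The superblock test builds each RAID member as a passthru bdev stacked on a malloc disk, which is what the vbdev_passthru notices around this point are registering: deleting the passthru later simulates a base bdev disappearing while the malloc disk underneath, superblock included, stays intact. Per layer that is two RPCs, shown here as a sketch with the rpc.py path shortened (the size, block size, and UUID are the values used in this trace; 32 MiB at 512-byte blocks matches the num_blocks 65536 seen in the bdev dumps):

    # 32 MiB, 512-byte blocks: the persistent backing disk
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    # the removable front bdev that the raid actually claims
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001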
[2024-06-09 20:58:05.584516] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:37.536 [2024-06-09 20:58:05.584562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.536 [2024-06-09 20:58:05.586882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.536 [2024-06-09 20:58:05.586935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.536 pt1 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.536 20:58:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:37.796 malloc2 00:15:37.796 20:58:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.055 [2024-06-09 20:58:06.021315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.055 [2024-06-09 20:58:06.021407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.055 [2024-06-09 20:58:06.021451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:38.055 [2024-06-09 20:58:06.021501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.055 [2024-06-09 20:58:06.023638] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.055 [2024-06-09 20:58:06.023684] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.055 pt2 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.055 20:58:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:38.055 malloc3 00:15:38.314 20:58:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.314 [2024-06-09 20:58:06.422481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.314 [2024-06-09 20:58:06.422592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.314 
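Every verify_raid_bdev_state call in this log reduces to one bdev_raid_get_bdevs RPC filtered through jq, with the expected state, level, strip size, and base-bdev counts asserted against the returned JSON. A condensed sketch of that check for the raid_bdev1 array assembled a little further down (the field names match the JSON dumps in this trace; the helper body is paraphrased, not copied from bdev_raid.sh):

    info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.state' <<<"$info")" = online ]
    [ "$(jq -r '.raid_level' <<<"$info")" = raid0 ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 3 ]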
[2024-06-09 20:58:06.422637] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:38.314 [2024-06-09 20:58:06.422712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.314 [2024-06-09 20:58:06.425017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.314 [2024-06-09 20:58:06.425083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.314 pt3 00:15:38.314 20:58:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:38.314 20:58:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:38.314 20:58:06 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:38.573 [2024-06-09 20:58:06.618586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.573 [2024-06-09 20:58:06.620490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.573 [2024-06-09 20:58:06.620554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.573 [2024-06-09 20:58:06.620778] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:15:38.573 [2024-06-09 20:58:06.620797] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:38.574 [2024-06-09 20:58:06.620921] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:38.574 [2024-06-09 20:58:06.621264] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:15:38.574 [2024-06-09 20:58:06.621284] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:15:38.574 [2024-06-09 20:58:06.621422] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.574 20:58:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.833 20:58:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.833 "name": "raid_bdev1", 00:15:38.833 "uuid": "01c767ef-efab-4a6f-bb62-4e598e9bc325", 00:15:38.833 "strip_size_kb": 64, 00:15:38.833 "state": "online", 00:15:38.833 "raid_level": "raid0", 00:15:38.833 "superblock": true, 00:15:38.833 "num_base_bdevs": 3, 00:15:38.833 "num_base_bdevs_discovered": 3, 00:15:38.833 "num_base_bdevs_operational": 3, 00:15:38.833 "base_bdevs_list": [ 00:15:38.833 { 00:15:38.833 "name": "pt1", 00:15:38.833 "uuid": 
"daffb7b7-e3f5-5f31-8fdb-76b5a2fccbac", 00:15:38.833 "is_configured": true, 00:15:38.833 "data_offset": 2048, 00:15:38.833 "data_size": 63488 00:15:38.833 }, 00:15:38.833 { 00:15:38.833 "name": "pt2", 00:15:38.833 "uuid": "e0abb241-9d30-58cd-b332-a5f7e8e5b1b0", 00:15:38.833 "is_configured": true, 00:15:38.833 "data_offset": 2048, 00:15:38.833 "data_size": 63488 00:15:38.833 }, 00:15:38.833 { 00:15:38.833 "name": "pt3", 00:15:38.833 "uuid": "8deb97d6-a1c5-568b-85ea-351dc9841644", 00:15:38.833 "is_configured": true, 00:15:38.833 "data_offset": 2048, 00:15:38.833 "data_size": 63488 00:15:38.833 } 00:15:38.833 ] 00:15:38.833 }' 00:15:38.833 20:58:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.833 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:15:39.401 20:58:07 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.401 20:58:07 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:39.660 [2024-06-09 20:58:07.674923] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.660 20:58:07 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=01c767ef-efab-4a6f-bb62-4e598e9bc325 00:15:39.660 20:58:07 -- bdev/bdev_raid.sh@380 -- # '[' -z 01c767ef-efab-4a6f-bb62-4e598e9bc325 ']' 00:15:39.660 20:58:07 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:39.918 [2024-06-09 20:58:07.878779] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.918 [2024-06-09 20:58:07.878814] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.918 [2024-06-09 20:58:07.878914] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.919 [2024-06-09 20:58:07.879023] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.919 [2024-06-09 20:58:07.879041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:15:39.919 20:58:07 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.919 20:58:07 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:39.919 20:58:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:39.919 20:58:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:39.919 20:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:39.919 20:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:40.178 20:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.178 20:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:40.437 20:58:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.437 20:58:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:40.696 20:58:08 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:40.696 20:58:08 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:40.696 20:58:08 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:40.696 20:58:08 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:40.696 20:58:08 -- common/autotest_common.sh@640 -- # local es=0 00:15:40.696 20:58:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:40.696 20:58:08 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.696 20:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.696 20:58:08 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.696 20:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.696 20:58:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.696 20:58:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.696 20:58:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.696 20:58:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:40.696 20:58:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:40.955 [2024-06-09 20:58:09.046966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:40.955 [2024-06-09 20:58:09.048836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:40.955 [2024-06-09 20:58:09.048905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:40.955 [2024-06-09 20:58:09.048958] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:40.955 [2024-06-09 20:58:09.049068] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:40.955 [2024-06-09 20:58:09.049112] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:40.955 [2024-06-09 20:58:09.049160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.955 [2024-06-09 20:58:09.049171] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:15:40.955 request: 00:15:40.955 { 00:15:40.955 "name": "raid_bdev1", 00:15:40.955 "raid_level": "raid0", 00:15:40.955 "base_bdevs": [ 00:15:40.955 "malloc1", 00:15:40.955 "malloc2", 00:15:40.955 "malloc3" 00:15:40.955 ], 00:15:40.955 "superblock": false, 00:15:40.955 "strip_size_kb": 64, 00:15:40.955 "method": "bdev_raid_create", 00:15:40.955 "req_id": 1 00:15:40.955 } 00:15:40.955 Got JSON-RPC error response 00:15:40.955 response: 00:15:40.955 { 00:15:40.955 "code": -17, 00:15:40.955 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:40.955 } 00:15:40.955 20:58:09 -- common/autotest_common.sh@643 -- # es=1 00:15:40.955 20:58:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:40.955 20:58:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:40.955 20:58:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:40.955 20:58:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.955 20:58:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:41.214 20:58:09 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:41.214 20:58:09 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:41.214 20:58:09 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.474 [2024-06-09 20:58:09.454978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.474 [2024-06-09 20:58:09.455106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.474 [2024-06-09 20:58:09.455145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:41.474 [2024-06-09 20:58:09.455167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.474 [2024-06-09 20:58:09.457443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.474 [2024-06-09 20:58:09.457505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.474 [2024-06-09 20:58:09.457670] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:41.474 [2024-06-09 20:58:09.457757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.474 pt1 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.474 20:58:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.732 20:58:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.732 "name": "raid_bdev1", 00:15:41.732 "uuid": "01c767ef-efab-4a6f-bb62-4e598e9bc325", 00:15:41.732 "strip_size_kb": 64, 00:15:41.732 "state": "configuring", 00:15:41.732 "raid_level": "raid0", 00:15:41.732 "superblock": true, 00:15:41.732 "num_base_bdevs": 3, 00:15:41.733 "num_base_bdevs_discovered": 1, 00:15:41.733 "num_base_bdevs_operational": 3, 00:15:41.733 "base_bdevs_list": [ 00:15:41.733 { 00:15:41.733 "name": "pt1", 00:15:41.733 "uuid": "daffb7b7-e3f5-5f31-8fdb-76b5a2fccbac", 00:15:41.733 "is_configured": true, 00:15:41.733 "data_offset": 2048, 00:15:41.733 "data_size": 63488 00:15:41.733 }, 00:15:41.733 { 00:15:41.733 "name": null, 00:15:41.733 "uuid": "e0abb241-9d30-58cd-b332-a5f7e8e5b1b0", 00:15:41.733 "is_configured": false, 00:15:41.733 "data_offset": 2048, 00:15:41.733 "data_size": 63488 00:15:41.733 }, 00:15:41.733 { 00:15:41.733 "name": null, 00:15:41.733 "uuid": "8deb97d6-a1c5-568b-85ea-351dc9841644", 00:15:41.733 "is_configured": false, 00:15:41.733 
"data_offset": 2048, 00:15:41.733 "data_size": 63488 00:15:41.733 } 00:15:41.733 ] 00:15:41.733 }' 00:15:41.733 20:58:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.733 20:58:09 -- common/autotest_common.sh@10 -- # set +x 00:15:42.300 20:58:10 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:42.300 20:58:10 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.559 [2024-06-09 20:58:10.491282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.559 [2024-06-09 20:58:10.491383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.559 [2024-06-09 20:58:10.491431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:42.559 [2024-06-09 20:58:10.491452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.559 [2024-06-09 20:58:10.491930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.559 [2024-06-09 20:58:10.491969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.559 [2024-06-09 20:58:10.492092] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:42.559 [2024-06-09 20:58:10.492116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.559 pt2 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.559 [2024-06-09 20:58:10.695324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.559 20:58:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.817 20:58:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.817 "name": "raid_bdev1", 00:15:42.817 "uuid": "01c767ef-efab-4a6f-bb62-4e598e9bc325", 00:15:42.817 "strip_size_kb": 64, 00:15:42.817 "state": "configuring", 00:15:42.817 "raid_level": "raid0", 00:15:42.817 "superblock": true, 00:15:42.817 "num_base_bdevs": 3, 00:15:42.817 "num_base_bdevs_discovered": 1, 00:15:42.817 "num_base_bdevs_operational": 3, 00:15:42.817 "base_bdevs_list": [ 00:15:42.817 { 00:15:42.817 "name": "pt1", 00:15:42.817 "uuid": "daffb7b7-e3f5-5f31-8fdb-76b5a2fccbac", 00:15:42.817 "is_configured": true, 00:15:42.817 "data_offset": 2048, 00:15:42.817 "data_size": 63488 00:15:42.817 }, 00:15:42.817 { 00:15:42.817 "name": null, 00:15:42.817 "uuid": 
"e0abb241-9d30-58cd-b332-a5f7e8e5b1b0", 00:15:42.817 "is_configured": false, 00:15:42.817 "data_offset": 2048, 00:15:42.817 "data_size": 63488 00:15:42.818 }, 00:15:42.818 { 00:15:42.818 "name": null, 00:15:42.818 "uuid": "8deb97d6-a1c5-568b-85ea-351dc9841644", 00:15:42.818 "is_configured": false, 00:15:42.818 "data_offset": 2048, 00:15:42.818 "data_size": 63488 00:15:42.818 } 00:15:42.818 ] 00:15:42.818 }' 00:15:42.818 20:58:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.818 20:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:43.384 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:43.384 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.384 20:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.643 [2024-06-09 20:58:11.643482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.643 [2024-06-09 20:58:11.643602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.643 [2024-06-09 20:58:11.643641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:43.643 [2024-06-09 20:58:11.643671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.643 [2024-06-09 20:58:11.644246] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.643 [2024-06-09 20:58:11.644316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.643 [2024-06-09 20:58:11.644440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:43.643 [2024-06-09 20:58:11.644465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.643 pt2 00:15:43.643 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:43.643 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.643 20:58:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.902 [2024-06-09 20:58:11.895550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.902 [2024-06-09 20:58:11.895667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.902 [2024-06-09 20:58:11.895705] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:43.902 [2024-06-09 20:58:11.895732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.902 [2024-06-09 20:58:11.896248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.902 [2024-06-09 20:58:11.896303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.902 [2024-06-09 20:58:11.896467] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:43.902 [2024-06-09 20:58:11.896493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.902 [2024-06-09 20:58:11.896621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:15:43.902 [2024-06-09 20:58:11.896633] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:43.902 [2024-06-09 20:58:11.896736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:15:43.902 [2024-06-09 20:58:11.897076] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:15:43.902 [2024-06-09 20:58:11.897100] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:15:43.902 [2024-06-09 20:58:11.897231] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.902 pt3 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.902 20:58:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.161 20:58:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.161 "name": "raid_bdev1", 00:15:44.161 "uuid": "01c767ef-efab-4a6f-bb62-4e598e9bc325", 00:15:44.161 "strip_size_kb": 64, 00:15:44.161 "state": "online", 00:15:44.161 "raid_level": "raid0", 00:15:44.161 "superblock": true, 00:15:44.161 "num_base_bdevs": 3, 00:15:44.161 "num_base_bdevs_discovered": 3, 00:15:44.161 "num_base_bdevs_operational": 3, 00:15:44.161 "base_bdevs_list": [ 00:15:44.161 { 00:15:44.161 "name": "pt1", 00:15:44.161 "uuid": "daffb7b7-e3f5-5f31-8fdb-76b5a2fccbac", 00:15:44.161 "is_configured": true, 00:15:44.161 "data_offset": 2048, 00:15:44.161 "data_size": 63488 00:15:44.161 }, 00:15:44.161 { 00:15:44.161 "name": "pt2", 00:15:44.161 "uuid": "e0abb241-9d30-58cd-b332-a5f7e8e5b1b0", 00:15:44.161 "is_configured": true, 00:15:44.161 "data_offset": 2048, 00:15:44.161 "data_size": 63488 00:15:44.161 }, 00:15:44.161 { 00:15:44.161 "name": "pt3", 00:15:44.161 "uuid": "8deb97d6-a1c5-568b-85ea-351dc9841644", 00:15:44.161 "is_configured": true, 00:15:44.161 "data_offset": 2048, 00:15:44.161 "data_size": 63488 00:15:44.161 } 00:15:44.161 ] 00:15:44.161 }' 00:15:44.161 20:58:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.161 20:58:12 -- common/autotest_common.sh@10 -- # set +x 00:15:44.728 20:58:12 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.728 20:58:12 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:44.986 [2024-06-09 20:58:12.915930] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.986 20:58:12 -- bdev/bdev_raid.sh@430 -- # '[' 01c767ef-efab-4a6f-bb62-4e598e9bc325 '!=' 01c767ef-efab-4a6f-bb62-4e598e9bc325 ']' 00:15:44.987 20:58:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:44.987 20:58:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:44.987 
20:58:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:44.987 20:58:12 -- bdev/bdev_raid.sh@511 -- # killprocess 115039 00:15:44.987 20:58:12 -- common/autotest_common.sh@926 -- # '[' -z 115039 ']' 00:15:44.987 20:58:12 -- common/autotest_common.sh@930 -- # kill -0 115039 00:15:44.987 20:58:12 -- common/autotest_common.sh@931 -- # uname 00:15:44.987 20:58:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.987 20:58:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115039 00:15:44.987 killing process with pid 115039 00:15:44.987 20:58:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.987 20:58:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.987 20:58:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115039' 00:15:44.987 20:58:12 -- common/autotest_common.sh@945 -- # kill 115039 00:15:44.987 [2024-06-09 20:58:12.953856] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.987 20:58:12 -- common/autotest_common.sh@950 -- # wait 115039 00:15:44.987 [2024-06-09 20:58:12.953937] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.987 [2024-06-09 20:58:12.953995] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.987 [2024-06-09 20:58:12.954005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:15:44.987 [2024-06-09 20:58:13.157611] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.922 ************************************ 00:15:45.922 END TEST raid_superblock_test 00:15:45.922 ************************************ 00:15:45.922 20:58:14 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:45.922 00:15:45.922 real 0m9.897s 00:15:45.922 user 0m17.173s 00:15:45.922 sys 0m1.168s 00:15:45.922 20:58:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.922 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:46.180 20:58:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:46.180 20:58:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.180 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 ************************************ 00:15:46.180 START TEST raid_state_function_test 00:15:46.180 ************************************ 00:15:46.180 20:58:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:46.180 20:58:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=115345 00:15:46.181 Process raid pid: 115345 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115345' 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115345 /var/tmp/spdk-raid.sock 00:15:46.181 20:58:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:46.181 20:58:14 -- common/autotest_common.sh@819 -- # '[' -z 115345 ']' 00:15:46.181 20:58:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.181 20:58:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.181 20:58:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:46.181 20:58:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.181 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:15:46.181 [2024-06-09 20:58:14.209965] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
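The trace above is the start of raid_state_function_test: a dedicated bdev_svc app is launched on its own RPC socket with bdev_raid debug logging, and every step afterwards is an rpc.py call against that socket. A minimal hand-run sketch of the same setup, using only commands visible in this log (the repo path and socket path are taken from this run and will differ elsewhere):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
rpc="$SPDK/scripts/rpc.py -s $SOCK"

# Bare bdev service with raid debug logging, as the harness starts it.
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!
sleep 1  # the harness instead polls the socket via waitforlisten

# Registering the array before any base bdev exists parks it in the
# "configuring" state seen in the JSON dumps below; no -s flag, since this
# variant runs with superblock=false.
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Each malloc bdev that appears is claimed by the array and bumps
# num_base_bdevs_discovered; after the third one the state flips to "online".
for i in 1 2 3; do
  $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
done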
00:15:46.181 [2024-06-09 20:58:14.210203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.439 [2024-06-09 20:58:14.376877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.439 [2024-06-09 20:58:14.556792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.698 [2024-06-09 20:58:14.729578] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.956 20:58:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.956 20:58:15 -- common/autotest_common.sh@852 -- # return 0 00:15:46.956 20:58:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:47.215 [2024-06-09 20:58:15.311440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.215 [2024-06-09 20:58:15.311510] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.215 [2024-06-09 20:58:15.311540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.215 [2024-06-09 20:58:15.311558] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.215 [2024-06-09 20:58:15.311565] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.215 [2024-06-09 20:58:15.311604] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.215 20:58:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.473 20:58:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.473 "name": "Existed_Raid", 00:15:47.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.473 "strip_size_kb": 64, 00:15:47.473 "state": "configuring", 00:15:47.473 "raid_level": "concat", 00:15:47.473 "superblock": false, 00:15:47.473 "num_base_bdevs": 3, 00:15:47.473 "num_base_bdevs_discovered": 0, 00:15:47.473 "num_base_bdevs_operational": 3, 00:15:47.473 "base_bdevs_list": [ 00:15:47.473 { 00:15:47.473 "name": "BaseBdev1", 00:15:47.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.473 "is_configured": false, 00:15:47.473 "data_offset": 0, 00:15:47.473 "data_size": 0 00:15:47.473 }, 00:15:47.473 { 00:15:47.473 "name": "BaseBdev2", 00:15:47.473 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:47.473 "is_configured": false, 00:15:47.473 "data_offset": 0, 00:15:47.473 "data_size": 0 00:15:47.473 }, 00:15:47.473 { 00:15:47.473 "name": "BaseBdev3", 00:15:47.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.473 "is_configured": false, 00:15:47.473 "data_offset": 0, 00:15:47.473 "data_size": 0 00:15:47.473 } 00:15:47.473 ] 00:15:47.473 }' 00:15:47.473 20:58:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.474 20:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:48.077 20:58:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.335 [2024-06-09 20:58:16.267529] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.335 [2024-06-09 20:58:16.267567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:48.335 20:58:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:48.594 [2024-06-09 20:58:16.515621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.594 [2024-06-09 20:58:16.515684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.594 [2024-06-09 20:58:16.515714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.594 [2024-06-09 20:58:16.515740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.594 [2024-06-09 20:58:16.515748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:48.594 [2024-06-09 20:58:16.515773] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:48.594 20:58:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.594 [2024-06-09 20:58:16.741340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.594 BaseBdev1 00:15:48.594 20:58:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:48.594 20:58:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:48.594 20:58:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.594 20:58:16 -- common/autotest_common.sh@889 -- # local i 00:15:48.594 20:58:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.594 20:58:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.594 20:58:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.853 20:58:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.111 [ 00:15:49.111 { 00:15:49.111 "name": "BaseBdev1", 00:15:49.111 "aliases": [ 00:15:49.111 "9bcdb00b-d686-4aab-89c2-0ada19f85eb8" 00:15:49.111 ], 00:15:49.111 "product_name": "Malloc disk", 00:15:49.111 "block_size": 512, 00:15:49.111 "num_blocks": 65536, 00:15:49.111 "uuid": "9bcdb00b-d686-4aab-89c2-0ada19f85eb8", 00:15:49.111 "assigned_rate_limits": { 00:15:49.111 "rw_ios_per_sec": 0, 00:15:49.111 "rw_mbytes_per_sec": 0, 00:15:49.111 "r_mbytes_per_sec": 0, 00:15:49.111 "w_mbytes_per_sec": 
0 00:15:49.111 }, 00:15:49.111 "claimed": true, 00:15:49.111 "claim_type": "exclusive_write", 00:15:49.111 "zoned": false, 00:15:49.111 "supported_io_types": { 00:15:49.111 "read": true, 00:15:49.111 "write": true, 00:15:49.111 "unmap": true, 00:15:49.111 "write_zeroes": true, 00:15:49.111 "flush": true, 00:15:49.111 "reset": true, 00:15:49.111 "compare": false, 00:15:49.111 "compare_and_write": false, 00:15:49.111 "abort": true, 00:15:49.111 "nvme_admin": false, 00:15:49.111 "nvme_io": false 00:15:49.111 }, 00:15:49.111 "memory_domains": [ 00:15:49.111 { 00:15:49.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.111 "dma_device_type": 2 00:15:49.111 } 00:15:49.111 ], 00:15:49.111 "driver_specific": {} 00:15:49.111 } 00:15:49.111 ] 00:15:49.111 20:58:17 -- common/autotest_common.sh@895 -- # return 0 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.111 20:58:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.370 20:58:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.370 "name": "Existed_Raid", 00:15:49.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.370 "strip_size_kb": 64, 00:15:49.370 "state": "configuring", 00:15:49.370 "raid_level": "concat", 00:15:49.370 "superblock": false, 00:15:49.370 "num_base_bdevs": 3, 00:15:49.370 "num_base_bdevs_discovered": 1, 00:15:49.370 "num_base_bdevs_operational": 3, 00:15:49.370 "base_bdevs_list": [ 00:15:49.370 { 00:15:49.370 "name": "BaseBdev1", 00:15:49.370 "uuid": "9bcdb00b-d686-4aab-89c2-0ada19f85eb8", 00:15:49.370 "is_configured": true, 00:15:49.370 "data_offset": 0, 00:15:49.370 "data_size": 65536 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "name": "BaseBdev2", 00:15:49.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.370 "is_configured": false, 00:15:49.370 "data_offset": 0, 00:15:49.370 "data_size": 0 00:15:49.370 }, 00:15:49.370 { 00:15:49.370 "name": "BaseBdev3", 00:15:49.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.370 "is_configured": false, 00:15:49.370 "data_offset": 0, 00:15:49.370 "data_size": 0 00:15:49.370 } 00:15:49.370 ] 00:15:49.370 }' 00:15:49.370 20:58:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.370 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:49.936 20:58:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.195 [2024-06-09 20:58:18.205708] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.195 [2024-06-09 20:58:18.205776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:15:50.195 20:58:18 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:50.195 20:58:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:50.453 [2024-06-09 20:58:18.453809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.453 [2024-06-09 20:58:18.455853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.453 [2024-06-09 20:58:18.455947] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.453 [2024-06-09 20:58:18.455976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.453 [2024-06-09 20:58:18.456019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.453 20:58:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.712 20:58:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.712 "name": "Existed_Raid", 00:15:50.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.712 "strip_size_kb": 64, 00:15:50.712 "state": "configuring", 00:15:50.712 "raid_level": "concat", 00:15:50.712 "superblock": false, 00:15:50.712 "num_base_bdevs": 3, 00:15:50.712 "num_base_bdevs_discovered": 1, 00:15:50.712 "num_base_bdevs_operational": 3, 00:15:50.712 "base_bdevs_list": [ 00:15:50.712 { 00:15:50.712 "name": "BaseBdev1", 00:15:50.712 "uuid": "9bcdb00b-d686-4aab-89c2-0ada19f85eb8", 00:15:50.712 "is_configured": true, 00:15:50.712 "data_offset": 0, 00:15:50.712 "data_size": 65536 00:15:50.712 }, 00:15:50.712 { 00:15:50.712 "name": "BaseBdev2", 00:15:50.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.712 "is_configured": false, 00:15:50.712 "data_offset": 0, 00:15:50.712 "data_size": 0 00:15:50.712 }, 00:15:50.712 { 00:15:50.712 "name": "BaseBdev3", 00:15:50.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.712 "is_configured": false, 00:15:50.712 "data_offset": 0, 00:15:50.712 "data_size": 0 00:15:50.712 } 00:15:50.712 ] 00:15:50.712 }' 00:15:50.712 20:58:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.712 20:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:51.285 20:58:19 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.544 [2024-06-09 20:58:19.533951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.544 BaseBdev2 00:15:51.544 20:58:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:51.544 20:58:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:51.544 20:58:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:51.544 20:58:19 -- common/autotest_common.sh@889 -- # local i 00:15:51.544 20:58:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:51.544 20:58:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:51.544 20:58:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.803 20:58:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.803 [ 00:15:51.803 { 00:15:51.803 "name": "BaseBdev2", 00:15:51.803 "aliases": [ 00:15:51.803 "6009bcfd-feda-45fe-8525-d1a2fcd30eec" 00:15:51.803 ], 00:15:51.803 "product_name": "Malloc disk", 00:15:51.803 "block_size": 512, 00:15:51.803 "num_blocks": 65536, 00:15:51.803 "uuid": "6009bcfd-feda-45fe-8525-d1a2fcd30eec", 00:15:51.803 "assigned_rate_limits": { 00:15:51.803 "rw_ios_per_sec": 0, 00:15:51.803 "rw_mbytes_per_sec": 0, 00:15:51.803 "r_mbytes_per_sec": 0, 00:15:51.803 "w_mbytes_per_sec": 0 00:15:51.803 }, 00:15:51.803 "claimed": true, 00:15:51.803 "claim_type": "exclusive_write", 00:15:51.803 "zoned": false, 00:15:51.803 "supported_io_types": { 00:15:51.803 "read": true, 00:15:51.803 "write": true, 00:15:51.803 "unmap": true, 00:15:51.803 "write_zeroes": true, 00:15:51.803 "flush": true, 00:15:51.803 "reset": true, 00:15:51.803 "compare": false, 00:15:51.803 "compare_and_write": false, 00:15:51.803 "abort": true, 00:15:51.803 "nvme_admin": false, 00:15:51.803 "nvme_io": false 00:15:51.803 }, 00:15:51.803 "memory_domains": [ 00:15:51.803 { 00:15:51.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.803 "dma_device_type": 2 00:15:51.803 } 00:15:51.803 ], 00:15:51.803 "driver_specific": {} 00:15:51.803 } 00:15:51.803 ] 00:15:51.803 20:58:19 -- common/autotest_common.sh@895 -- # return 0 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.803 20:58:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
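verify_raid_bdev_state, which the trace keeps expanding at bdev_raid.sh@117-127, reduces to one bdev_raid_get_bdevs call plus jq field checks. A condensed paraphrase of that helper (field names come straight from the JSON dumps in this log; the real helper asserts on more fields than shown here):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_state() { # args: raid_bdev_name expected_state expected_discovered
  local info
  info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
  [[ $(jq -r '.state' <<< "$info") == "$2" ]] &&
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $3 ]]
}

# At this point in the log BaseBdev1 exists but BaseBdev2/3 do not, so:
verify_state Existed_Raid configuring 1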
00:15:52.063 20:58:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.063 "name": "Existed_Raid", 00:15:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.063 "strip_size_kb": 64, 00:15:52.063 "state": "configuring", 00:15:52.063 "raid_level": "concat", 00:15:52.063 "superblock": false, 00:15:52.063 "num_base_bdevs": 3, 00:15:52.063 "num_base_bdevs_discovered": 2, 00:15:52.063 "num_base_bdevs_operational": 3, 00:15:52.063 "base_bdevs_list": [ 00:15:52.063 { 00:15:52.063 "name": "BaseBdev1", 00:15:52.063 "uuid": "9bcdb00b-d686-4aab-89c2-0ada19f85eb8", 00:15:52.063 "is_configured": true, 00:15:52.063 "data_offset": 0, 00:15:52.063 "data_size": 65536 00:15:52.063 }, 00:15:52.063 { 00:15:52.063 "name": "BaseBdev2", 00:15:52.063 "uuid": "6009bcfd-feda-45fe-8525-d1a2fcd30eec", 00:15:52.063 "is_configured": true, 00:15:52.063 "data_offset": 0, 00:15:52.063 "data_size": 65536 00:15:52.063 }, 00:15:52.063 { 00:15:52.063 "name": "BaseBdev3", 00:15:52.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.063 "is_configured": false, 00:15:52.063 "data_offset": 0, 00:15:52.063 "data_size": 0 00:15:52.063 } 00:15:52.063 ] 00:15:52.063 }' 00:15:52.063 20:58:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.063 20:58:20 -- common/autotest_common.sh@10 -- # set +x 00:15:52.632 20:58:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.891 [2024-06-09 20:58:20.986097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.891 [2024-06-09 20:58:20.986150] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:52.891 [2024-06-09 20:58:20.986160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:52.891 [2024-06-09 20:58:20.986285] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:52.891 [2024-06-09 20:58:20.986672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:52.891 [2024-06-09 20:58:20.986706] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:52.891 [2024-06-09 20:58:20.987019] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.891 BaseBdev3 00:15:52.891 20:58:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:52.891 20:58:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:52.891 20:58:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:52.891 20:58:20 -- common/autotest_common.sh@889 -- # local i 00:15:52.891 20:58:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:52.891 20:58:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:52.891 20:58:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.150 20:58:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:53.409 [ 00:15:53.409 { 00:15:53.409 "name": "BaseBdev3", 00:15:53.409 "aliases": [ 00:15:53.409 "a15ce771-7686-40ea-b570-61dd271a623d" 00:15:53.409 ], 00:15:53.409 "product_name": "Malloc disk", 00:15:53.409 "block_size": 512, 00:15:53.409 "num_blocks": 65536, 00:15:53.409 "uuid": "a15ce771-7686-40ea-b570-61dd271a623d", 00:15:53.409 "assigned_rate_limits": { 00:15:53.409 
"rw_ios_per_sec": 0, 00:15:53.409 "rw_mbytes_per_sec": 0, 00:15:53.409 "r_mbytes_per_sec": 0, 00:15:53.409 "w_mbytes_per_sec": 0 00:15:53.409 }, 00:15:53.409 "claimed": true, 00:15:53.409 "claim_type": "exclusive_write", 00:15:53.409 "zoned": false, 00:15:53.409 "supported_io_types": { 00:15:53.409 "read": true, 00:15:53.409 "write": true, 00:15:53.409 "unmap": true, 00:15:53.409 "write_zeroes": true, 00:15:53.409 "flush": true, 00:15:53.409 "reset": true, 00:15:53.409 "compare": false, 00:15:53.409 "compare_and_write": false, 00:15:53.409 "abort": true, 00:15:53.409 "nvme_admin": false, 00:15:53.409 "nvme_io": false 00:15:53.409 }, 00:15:53.409 "memory_domains": [ 00:15:53.409 { 00:15:53.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.409 "dma_device_type": 2 00:15:53.409 } 00:15:53.409 ], 00:15:53.409 "driver_specific": {} 00:15:53.409 } 00:15:53.409 ] 00:15:53.409 20:58:21 -- common/autotest_common.sh@895 -- # return 0 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.409 20:58:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.668 20:58:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.668 "name": "Existed_Raid", 00:15:53.668 "uuid": "6df902f2-1088-4835-8bcb-9b76729b43b8", 00:15:53.668 "strip_size_kb": 64, 00:15:53.668 "state": "online", 00:15:53.668 "raid_level": "concat", 00:15:53.668 "superblock": false, 00:15:53.668 "num_base_bdevs": 3, 00:15:53.668 "num_base_bdevs_discovered": 3, 00:15:53.668 "num_base_bdevs_operational": 3, 00:15:53.668 "base_bdevs_list": [ 00:15:53.668 { 00:15:53.668 "name": "BaseBdev1", 00:15:53.668 "uuid": "9bcdb00b-d686-4aab-89c2-0ada19f85eb8", 00:15:53.668 "is_configured": true, 00:15:53.668 "data_offset": 0, 00:15:53.668 "data_size": 65536 00:15:53.668 }, 00:15:53.668 { 00:15:53.668 "name": "BaseBdev2", 00:15:53.668 "uuid": "6009bcfd-feda-45fe-8525-d1a2fcd30eec", 00:15:53.668 "is_configured": true, 00:15:53.668 "data_offset": 0, 00:15:53.668 "data_size": 65536 00:15:53.668 }, 00:15:53.668 { 00:15:53.668 "name": "BaseBdev3", 00:15:53.668 "uuid": "a15ce771-7686-40ea-b570-61dd271a623d", 00:15:53.668 "is_configured": true, 00:15:53.668 "data_offset": 0, 00:15:53.668 "data_size": 65536 00:15:53.668 } 00:15:53.668 ] 00:15:53.668 }' 00:15:53.668 20:58:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.668 20:58:21 -- common/autotest_common.sh@10 -- # set +x 00:15:54.236 20:58:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:54.494 [2024-06-09 20:58:22.522528] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.494 [2024-06-09 20:58:22.522570] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.494 [2024-06-09 20:58:22.522652] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.494 20:58:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.752 20:58:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.752 "name": "Existed_Raid", 00:15:54.752 "uuid": "6df902f2-1088-4835-8bcb-9b76729b43b8", 00:15:54.752 "strip_size_kb": 64, 00:15:54.752 "state": "offline", 00:15:54.752 "raid_level": "concat", 00:15:54.752 "superblock": false, 00:15:54.752 "num_base_bdevs": 3, 00:15:54.752 "num_base_bdevs_discovered": 2, 00:15:54.752 "num_base_bdevs_operational": 2, 00:15:54.752 "base_bdevs_list": [ 00:15:54.752 { 00:15:54.752 "name": null, 00:15:54.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.752 "is_configured": false, 00:15:54.752 "data_offset": 0, 00:15:54.752 "data_size": 65536 00:15:54.752 }, 00:15:54.752 { 00:15:54.752 "name": "BaseBdev2", 00:15:54.752 "uuid": "6009bcfd-feda-45fe-8525-d1a2fcd30eec", 00:15:54.752 "is_configured": true, 00:15:54.753 "data_offset": 0, 00:15:54.753 "data_size": 65536 00:15:54.753 }, 00:15:54.753 { 00:15:54.753 "name": "BaseBdev3", 00:15:54.753 "uuid": "a15ce771-7686-40ea-b570-61dd271a623d", 00:15:54.753 "is_configured": true, 00:15:54.753 "data_offset": 0, 00:15:54.753 "data_size": 65536 00:15:54.753 } 00:15:54.753 ] 00:15:54.753 }' 00:15:54.753 20:58:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.753 20:58:22 -- common/autotest_common.sh@10 -- # set +x 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.687 20:58:23 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.687 20:58:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:55.946 [2024-06-09 20:58:23.960395] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.946 20:58:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.946 20:58:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.946 20:58:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.946 20:58:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:56.205 20:58:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:56.205 20:58:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.205 20:58:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:56.464 [2024-06-09 20:58:24.470250] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.464 [2024-06-09 20:58:24.470343] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:56.464 20:58:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:56.464 20:58:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:56.464 20:58:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.464 20:58:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.723 20:58:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:56.723 20:58:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:56.723 20:58:24 -- bdev/bdev_raid.sh@287 -- # killprocess 115345 00:15:56.723 20:58:24 -- common/autotest_common.sh@926 -- # '[' -z 115345 ']' 00:15:56.723 20:58:24 -- common/autotest_common.sh@930 -- # kill -0 115345 00:15:56.723 20:58:24 -- common/autotest_common.sh@931 -- # uname 00:15:56.723 20:58:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.723 20:58:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115345 00:15:56.723 killing process with pid 115345 00:15:56.723 20:58:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.723 20:58:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.723 20:58:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115345' 00:15:56.723 20:58:24 -- common/autotest_common.sh@945 -- # kill 115345 00:15:56.723 20:58:24 -- common/autotest_common.sh@950 -- # wait 115345 00:15:56.723 [2024-06-09 20:58:24.753706] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.723 [2024-06-09 20:58:24.753843] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.660 ************************************ 00:15:57.660 END TEST raid_state_function_test 00:15:57.660 ************************************ 00:15:57.660 20:58:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:57.660 00:15:57.660 real 0m11.688s 00:15:57.660 user 0m20.648s 00:15:57.660 sys 0m1.321s 00:15:57.660 20:58:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.660 20:58:25 -- common/autotest_common.sh@10 -- # set +x 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:57.920 20:58:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
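Both runs pivot on has_redundancy, which the xtrace shows only as its 'case $1 in' line followed by 'return 1' for concat (bdev_raid.sh@195-197). A reconstruction of its likely shape — raid0/concat falling through to failure matches the trace, while raid1 returning success is an inference from the helper's name and the offline expectation above, not something this excerpt shows:

has_redundancy() {
  case $1 in
    raid1) return 0 ;;
    *)     return 1 ;;
  esac
}

# Mirrors the log: deleting a base bdev from a level with no redundancy is
# expected to take the array offline rather than leave it degraded-but-online.
if has_redundancy concat; then expected_state=online; else expected_state=offline; fi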
00:15:57.920 20:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.920 20:58:25 -- common/autotest_common.sh@10 -- # set +x 00:15:57.920 ************************************ 00:15:57.920 START TEST raid_state_function_test_sb 00:15:57.920 ************************************ 00:15:57.920 20:58:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=115722 00:15:57.920 Process raid pid: 115722 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115722' 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115722 /var/tmp/spdk-raid.sock 00:15:57.920 20:58:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:57.920 20:58:25 -- common/autotest_common.sh@819 -- # '[' -z 115722 ']' 00:15:57.920 20:58:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.920 20:58:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:57.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.920 20:58:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.920 20:58:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:57.920 20:58:25 -- common/autotest_common.sh@10 -- # set +x 00:15:57.920 [2024-06-09 20:58:25.971759] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
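The _sb variant that starts here differs from the previous run only in superblock=true, which bdev_raid.sh@219-220 above turns into superblock_create_arg=-s. The two create calls side by side, reusing the socket and repo paths assumed from this run (they are not meant to be issued back to back against one app, since the name would collide):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# superblock=false (previous test): no on-disk metadata on the base bdevs.
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# superblock=true (this test): -s writes a RAID superblock to each base bdev,
# which is why the JSON dumps below report "superblock": true and, once a base
# bdev is claimed, a data_offset of 2048 blocks instead of 0.
$rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid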
00:15:57.920 [2024-06-09 20:58:25.971980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.179 [2024-06-09 20:58:26.138609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.439 [2024-06-09 20:58:26.358184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.439 [2024-06-09 20:58:26.558278] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.697 20:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.697 20:58:26 -- common/autotest_common.sh@852 -- # return 0 00:15:58.697 20:58:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:58.957 [2024-06-09 20:58:27.074027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.957 [2024-06-09 20:58:27.074149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.957 [2024-06-09 20:58:27.074164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.957 [2024-06-09 20:58:27.074183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.957 [2024-06-09 20:58:27.074191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.957 [2024-06-09 20:58:27.074239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.957 20:58:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.216 20:58:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.216 "name": "Existed_Raid", 00:15:59.216 "uuid": "0f559542-9aae-4188-a756-8084cfe59abe", 00:15:59.216 "strip_size_kb": 64, 00:15:59.216 "state": "configuring", 00:15:59.216 "raid_level": "concat", 00:15:59.216 "superblock": true, 00:15:59.216 "num_base_bdevs": 3, 00:15:59.216 "num_base_bdevs_discovered": 0, 00:15:59.216 "num_base_bdevs_operational": 3, 00:15:59.216 "base_bdevs_list": [ 00:15:59.216 { 00:15:59.216 "name": "BaseBdev1", 00:15:59.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.216 "is_configured": false, 00:15:59.216 "data_offset": 0, 00:15:59.216 "data_size": 0 00:15:59.216 }, 00:15:59.216 { 00:15:59.216 "name": "BaseBdev2", 00:15:59.216 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:59.216 "is_configured": false, 00:15:59.216 "data_offset": 0, 00:15:59.216 "data_size": 0 00:15:59.216 }, 00:15:59.216 { 00:15:59.216 "name": "BaseBdev3", 00:15:59.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.216 "is_configured": false, 00:15:59.216 "data_offset": 0, 00:15:59.216 "data_size": 0 00:15:59.216 } 00:15:59.216 ] 00:15:59.216 }' 00:15:59.216 20:58:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.216 20:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:59.783 20:58:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.042 [2024-06-09 20:58:28.186122] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.042 [2024-06-09 20:58:28.186183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:00.042 20:58:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:00.301 [2024-06-09 20:58:28.430166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.301 [2024-06-09 20:58:28.430242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.301 [2024-06-09 20:58:28.430255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.301 [2024-06-09 20:58:28.430284] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.301 [2024-06-09 20:58:28.430293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.301 [2024-06-09 20:58:28.430321] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.301 20:58:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.632 [2024-06-09 20:58:28.643745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.632 BaseBdev1 00:16:00.632 20:58:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:00.632 20:58:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:00.632 20:58:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:00.632 20:58:28 -- common/autotest_common.sh@889 -- # local i 00:16:00.632 20:58:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:00.632 20:58:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:00.632 20:58:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.891 20:58:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.150 [ 00:16:01.150 { 00:16:01.150 "name": "BaseBdev1", 00:16:01.150 "aliases": [ 00:16:01.150 "e1f1ad22-5267-4107-8ec0-f525fa08c889" 00:16:01.150 ], 00:16:01.150 "product_name": "Malloc disk", 00:16:01.150 "block_size": 512, 00:16:01.150 "num_blocks": 65536, 00:16:01.150 "uuid": "e1f1ad22-5267-4107-8ec0-f525fa08c889", 00:16:01.150 "assigned_rate_limits": { 00:16:01.150 "rw_ios_per_sec": 0, 00:16:01.150 "rw_mbytes_per_sec": 0, 00:16:01.150 "r_mbytes_per_sec": 0, 00:16:01.150 
"w_mbytes_per_sec": 0 00:16:01.150 }, 00:16:01.150 "claimed": true, 00:16:01.150 "claim_type": "exclusive_write", 00:16:01.150 "zoned": false, 00:16:01.150 "supported_io_types": { 00:16:01.150 "read": true, 00:16:01.150 "write": true, 00:16:01.150 "unmap": true, 00:16:01.150 "write_zeroes": true, 00:16:01.150 "flush": true, 00:16:01.150 "reset": true, 00:16:01.150 "compare": false, 00:16:01.150 "compare_and_write": false, 00:16:01.150 "abort": true, 00:16:01.150 "nvme_admin": false, 00:16:01.150 "nvme_io": false 00:16:01.150 }, 00:16:01.150 "memory_domains": [ 00:16:01.150 { 00:16:01.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.150 "dma_device_type": 2 00:16:01.150 } 00:16:01.150 ], 00:16:01.150 "driver_specific": {} 00:16:01.150 } 00:16:01.150 ] 00:16:01.150 20:58:29 -- common/autotest_common.sh@895 -- # return 0 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.150 20:58:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.150 "name": "Existed_Raid", 00:16:01.151 "uuid": "788e6e70-2404-4ab6-a26f-5b5af4601440", 00:16:01.151 "strip_size_kb": 64, 00:16:01.151 "state": "configuring", 00:16:01.151 "raid_level": "concat", 00:16:01.151 "superblock": true, 00:16:01.151 "num_base_bdevs": 3, 00:16:01.151 "num_base_bdevs_discovered": 1, 00:16:01.151 "num_base_bdevs_operational": 3, 00:16:01.151 "base_bdevs_list": [ 00:16:01.151 { 00:16:01.151 "name": "BaseBdev1", 00:16:01.151 "uuid": "e1f1ad22-5267-4107-8ec0-f525fa08c889", 00:16:01.151 "is_configured": true, 00:16:01.151 "data_offset": 2048, 00:16:01.151 "data_size": 63488 00:16:01.151 }, 00:16:01.151 { 00:16:01.151 "name": "BaseBdev2", 00:16:01.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.151 "is_configured": false, 00:16:01.151 "data_offset": 0, 00:16:01.151 "data_size": 0 00:16:01.151 }, 00:16:01.151 { 00:16:01.151 "name": "BaseBdev3", 00:16:01.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.151 "is_configured": false, 00:16:01.151 "data_offset": 0, 00:16:01.151 "data_size": 0 00:16:01.151 } 00:16:01.151 ] 00:16:01.151 }' 00:16:01.151 20:58:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.151 20:58:29 -- common/autotest_common.sh@10 -- # set +x 00:16:02.088 20:58:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.088 [2024-06-09 20:58:30.076113] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.088 [2024-06-09 20:58:30.076228] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:02.088 20:58:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:02.088 20:58:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:02.347 20:58:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.606 BaseBdev1 00:16:02.606 20:58:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:02.606 20:58:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:02.606 20:58:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.606 20:58:30 -- common/autotest_common.sh@889 -- # local i 00:16:02.606 20:58:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.606 20:58:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.606 20:58:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.865 20:58:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.865 [ 00:16:02.865 { 00:16:02.865 "name": "BaseBdev1", 00:16:02.865 "aliases": [ 00:16:02.865 "38fe1766-585b-43d7-bc8c-7a325a547461" 00:16:02.865 ], 00:16:02.865 "product_name": "Malloc disk", 00:16:02.865 "block_size": 512, 00:16:02.865 "num_blocks": 65536, 00:16:02.865 "uuid": "38fe1766-585b-43d7-bc8c-7a325a547461", 00:16:02.865 "assigned_rate_limits": { 00:16:02.865 "rw_ios_per_sec": 0, 00:16:02.865 "rw_mbytes_per_sec": 0, 00:16:02.865 "r_mbytes_per_sec": 0, 00:16:02.865 "w_mbytes_per_sec": 0 00:16:02.865 }, 00:16:02.865 "claimed": false, 00:16:02.865 "zoned": false, 00:16:02.865 "supported_io_types": { 00:16:02.865 "read": true, 00:16:02.865 "write": true, 00:16:02.865 "unmap": true, 00:16:02.865 "write_zeroes": true, 00:16:02.865 "flush": true, 00:16:02.865 "reset": true, 00:16:02.865 "compare": false, 00:16:02.865 "compare_and_write": false, 00:16:02.865 "abort": true, 00:16:02.865 "nvme_admin": false, 00:16:02.865 "nvme_io": false 00:16:02.865 }, 00:16:02.865 "memory_domains": [ 00:16:02.865 { 00:16:02.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.865 "dma_device_type": 2 00:16:02.865 } 00:16:02.865 ], 00:16:02.865 "driver_specific": {} 00:16:02.865 } 00:16:02.865 ] 00:16:02.865 20:58:31 -- common/autotest_common.sh@895 -- # return 0 00:16:02.865 20:58:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:03.124 [2024-06-09 20:58:31.206760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.124 [2024-06-09 20:58:31.208787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.124 [2024-06-09 20:58:31.208866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.124 [2024-06-09 20:58:31.208896] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.124 [2024-06-09 20:58:31.208923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.124 
20:58:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.124 20:58:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.383 20:58:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.383 "name": "Existed_Raid", 00:16:03.383 "uuid": "4f5108e5-c4b3-4f70-84fa-6065bc312b22", 00:16:03.383 "strip_size_kb": 64, 00:16:03.383 "state": "configuring", 00:16:03.383 "raid_level": "concat", 00:16:03.383 "superblock": true, 00:16:03.383 "num_base_bdevs": 3, 00:16:03.383 "num_base_bdevs_discovered": 1, 00:16:03.383 "num_base_bdevs_operational": 3, 00:16:03.383 "base_bdevs_list": [ 00:16:03.383 { 00:16:03.383 "name": "BaseBdev1", 00:16:03.383 "uuid": "38fe1766-585b-43d7-bc8c-7a325a547461", 00:16:03.383 "is_configured": true, 00:16:03.383 "data_offset": 2048, 00:16:03.383 "data_size": 63488 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "name": "BaseBdev2", 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.383 "is_configured": false, 00:16:03.383 "data_offset": 0, 00:16:03.383 "data_size": 0 00:16:03.383 }, 00:16:03.383 { 00:16:03.383 "name": "BaseBdev3", 00:16:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.383 "is_configured": false, 00:16:03.383 "data_offset": 0, 00:16:03.383 "data_size": 0 00:16:03.383 } 00:16:03.383 ] 00:16:03.383 }' 00:16:03.383 20:58:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.383 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:03.951 20:58:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.209 [2024-06-09 20:58:32.311107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.209 BaseBdev2 00:16:04.209 20:58:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:04.209 20:58:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:04.209 20:58:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:04.209 20:58:32 -- common/autotest_common.sh@889 -- # local i 00:16:04.209 20:58:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:04.209 20:58:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:04.209 20:58:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.466 20:58:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.724 [ 00:16:04.724 { 00:16:04.724 "name": "BaseBdev2", 00:16:04.724 "aliases": [ 00:16:04.724 
"c72ac04d-ea5c-44a5-9d19-c437030e4812" 00:16:04.724 ], 00:16:04.724 "product_name": "Malloc disk", 00:16:04.724 "block_size": 512, 00:16:04.724 "num_blocks": 65536, 00:16:04.724 "uuid": "c72ac04d-ea5c-44a5-9d19-c437030e4812", 00:16:04.724 "assigned_rate_limits": { 00:16:04.724 "rw_ios_per_sec": 0, 00:16:04.724 "rw_mbytes_per_sec": 0, 00:16:04.724 "r_mbytes_per_sec": 0, 00:16:04.724 "w_mbytes_per_sec": 0 00:16:04.724 }, 00:16:04.724 "claimed": true, 00:16:04.724 "claim_type": "exclusive_write", 00:16:04.724 "zoned": false, 00:16:04.724 "supported_io_types": { 00:16:04.724 "read": true, 00:16:04.724 "write": true, 00:16:04.724 "unmap": true, 00:16:04.724 "write_zeroes": true, 00:16:04.724 "flush": true, 00:16:04.724 "reset": true, 00:16:04.724 "compare": false, 00:16:04.724 "compare_and_write": false, 00:16:04.724 "abort": true, 00:16:04.724 "nvme_admin": false, 00:16:04.724 "nvme_io": false 00:16:04.724 }, 00:16:04.724 "memory_domains": [ 00:16:04.724 { 00:16:04.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.724 "dma_device_type": 2 00:16:04.724 } 00:16:04.724 ], 00:16:04.724 "driver_specific": {} 00:16:04.724 } 00:16:04.724 ] 00:16:04.724 20:58:32 -- common/autotest_common.sh@895 -- # return 0 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.724 20:58:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.725 20:58:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.984 20:58:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.984 "name": "Existed_Raid", 00:16:04.984 "uuid": "4f5108e5-c4b3-4f70-84fa-6065bc312b22", 00:16:04.984 "strip_size_kb": 64, 00:16:04.984 "state": "configuring", 00:16:04.984 "raid_level": "concat", 00:16:04.984 "superblock": true, 00:16:04.984 "num_base_bdevs": 3, 00:16:04.984 "num_base_bdevs_discovered": 2, 00:16:04.984 "num_base_bdevs_operational": 3, 00:16:04.984 "base_bdevs_list": [ 00:16:04.984 { 00:16:04.984 "name": "BaseBdev1", 00:16:04.984 "uuid": "38fe1766-585b-43d7-bc8c-7a325a547461", 00:16:04.984 "is_configured": true, 00:16:04.984 "data_offset": 2048, 00:16:04.984 "data_size": 63488 00:16:04.984 }, 00:16:04.984 { 00:16:04.984 "name": "BaseBdev2", 00:16:04.984 "uuid": "c72ac04d-ea5c-44a5-9d19-c437030e4812", 00:16:04.984 "is_configured": true, 00:16:04.984 "data_offset": 2048, 00:16:04.984 "data_size": 63488 00:16:04.984 }, 00:16:04.984 { 00:16:04.984 "name": "BaseBdev3", 00:16:04.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.984 "is_configured": false, 00:16:04.984 "data_offset": 0, 00:16:04.984 "data_size": 0 
00:16:04.984 } 00:16:04.984 ] 00:16:04.984 }' 00:16:04.984 20:58:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.984 20:58:33 -- common/autotest_common.sh@10 -- # set +x 00:16:05.549 20:58:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.808 [2024-06-09 20:58:33.859744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.808 [2024-06-09 20:58:33.859979] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:05.808 [2024-06-09 20:58:33.859994] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:05.808 [2024-06-09 20:58:33.860135] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:05.808 [2024-06-09 20:58:33.860522] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:05.808 [2024-06-09 20:58:33.860551] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:05.808 [2024-06-09 20:58:33.860699] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.808 BaseBdev3 00:16:05.808 20:58:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:05.808 20:58:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:05.809 20:58:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:05.809 20:58:33 -- common/autotest_common.sh@889 -- # local i 00:16:05.809 20:58:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:05.809 20:58:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:05.809 20:58:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.067 20:58:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.325 [ 00:16:06.325 { 00:16:06.325 "name": "BaseBdev3", 00:16:06.325 "aliases": [ 00:16:06.325 "df8fd98c-39cd-470c-a086-c19ec5a4bd5b" 00:16:06.325 ], 00:16:06.325 "product_name": "Malloc disk", 00:16:06.325 "block_size": 512, 00:16:06.325 "num_blocks": 65536, 00:16:06.325 "uuid": "df8fd98c-39cd-470c-a086-c19ec5a4bd5b", 00:16:06.325 "assigned_rate_limits": { 00:16:06.325 "rw_ios_per_sec": 0, 00:16:06.325 "rw_mbytes_per_sec": 0, 00:16:06.325 "r_mbytes_per_sec": 0, 00:16:06.325 "w_mbytes_per_sec": 0 00:16:06.326 }, 00:16:06.326 "claimed": true, 00:16:06.326 "claim_type": "exclusive_write", 00:16:06.326 "zoned": false, 00:16:06.326 "supported_io_types": { 00:16:06.326 "read": true, 00:16:06.326 "write": true, 00:16:06.326 "unmap": true, 00:16:06.326 "write_zeroes": true, 00:16:06.326 "flush": true, 00:16:06.326 "reset": true, 00:16:06.326 "compare": false, 00:16:06.326 "compare_and_write": false, 00:16:06.326 "abort": true, 00:16:06.326 "nvme_admin": false, 00:16:06.326 "nvme_io": false 00:16:06.326 }, 00:16:06.326 "memory_domains": [ 00:16:06.326 { 00:16:06.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.326 "dma_device_type": 2 00:16:06.326 } 00:16:06.326 ], 00:16:06.326 "driver_specific": {} 00:16:06.326 } 00:16:06.326 ] 00:16:06.326 20:58:34 -- common/autotest_common.sh@895 -- # return 0 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.326 20:58:34 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.326 20:58:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.584 20:58:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.584 "name": "Existed_Raid", 00:16:06.584 "uuid": "4f5108e5-c4b3-4f70-84fa-6065bc312b22", 00:16:06.584 "strip_size_kb": 64, 00:16:06.584 "state": "online", 00:16:06.584 "raid_level": "concat", 00:16:06.584 "superblock": true, 00:16:06.584 "num_base_bdevs": 3, 00:16:06.584 "num_base_bdevs_discovered": 3, 00:16:06.584 "num_base_bdevs_operational": 3, 00:16:06.584 "base_bdevs_list": [ 00:16:06.584 { 00:16:06.584 "name": "BaseBdev1", 00:16:06.584 "uuid": "38fe1766-585b-43d7-bc8c-7a325a547461", 00:16:06.584 "is_configured": true, 00:16:06.584 "data_offset": 2048, 00:16:06.584 "data_size": 63488 00:16:06.584 }, 00:16:06.584 { 00:16:06.584 "name": "BaseBdev2", 00:16:06.584 "uuid": "c72ac04d-ea5c-44a5-9d19-c437030e4812", 00:16:06.584 "is_configured": true, 00:16:06.584 "data_offset": 2048, 00:16:06.584 "data_size": 63488 00:16:06.584 }, 00:16:06.584 { 00:16:06.584 "name": "BaseBdev3", 00:16:06.584 "uuid": "df8fd98c-39cd-470c-a086-c19ec5a4bd5b", 00:16:06.584 "is_configured": true, 00:16:06.584 "data_offset": 2048, 00:16:06.584 "data_size": 63488 00:16:06.584 } 00:16:06.584 ] 00:16:06.584 }' 00:16:06.584 20:58:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.584 20:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:07.150 20:58:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:07.408 [2024-06-09 20:58:35.352152] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.408 [2024-06-09 20:58:35.352187] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.408 [2024-06-09 20:58:35.352249] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:07.408 20:58:35 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.408 20:58:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.666 20:58:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.666 "name": "Existed_Raid", 00:16:07.666 "uuid": "4f5108e5-c4b3-4f70-84fa-6065bc312b22", 00:16:07.667 "strip_size_kb": 64, 00:16:07.667 "state": "offline", 00:16:07.667 "raid_level": "concat", 00:16:07.667 "superblock": true, 00:16:07.667 "num_base_bdevs": 3, 00:16:07.667 "num_base_bdevs_discovered": 2, 00:16:07.667 "num_base_bdevs_operational": 2, 00:16:07.667 "base_bdevs_list": [ 00:16:07.667 { 00:16:07.667 "name": null, 00:16:07.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.667 "is_configured": false, 00:16:07.667 "data_offset": 2048, 00:16:07.667 "data_size": 63488 00:16:07.667 }, 00:16:07.667 { 00:16:07.667 "name": "BaseBdev2", 00:16:07.667 "uuid": "c72ac04d-ea5c-44a5-9d19-c437030e4812", 00:16:07.667 "is_configured": true, 00:16:07.667 "data_offset": 2048, 00:16:07.667 "data_size": 63488 00:16:07.667 }, 00:16:07.667 { 00:16:07.667 "name": "BaseBdev3", 00:16:07.667 "uuid": "df8fd98c-39cd-470c-a086-c19ec5a4bd5b", 00:16:07.667 "is_configured": true, 00:16:07.667 "data_offset": 2048, 00:16:07.667 "data_size": 63488 00:16:07.667 } 00:16:07.667 ] 00:16:07.667 }' 00:16:07.667 20:58:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.667 20:58:35 -- common/autotest_common.sh@10 -- # set +x 00:16:08.233 20:58:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:08.233 20:58:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.233 20:58:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.233 20:58:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:08.491 20:58:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:08.491 20:58:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.491 20:58:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:08.749 [2024-06-09 20:58:36.739715] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.749 20:58:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:08.749 20:58:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.749 20:58:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.749 20:58:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:09.007 20:58:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:09.007 20:58:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.007 20:58:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:09.265 [2024-06-09 20:58:37.246059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
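The pass above tears the array down one base bdev at a time: each bdev_malloc_delete lands in _raid_bdev_remove_base_bdev, and the test re-reads the raid state after every removal. A minimal sketch of the same RPC flow, with the socket path, bdev names, and jq filter taken verbatim from this trace; it assumes the Existed_Raid array has already been assembled as above:

  # concat carries no redundancy (has_redundancy returned 1 above), so losing any
  # base bdev is expected to drop the array from "online" to "offline"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'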
00:16:09.265 [2024-06-09 20:58:37.246121] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:09.265 20:58:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.265 20:58:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.265 20:58:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.265 20:58:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.523 20:58:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:09.523 20:58:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:09.523 20:58:37 -- bdev/bdev_raid.sh@287 -- # killprocess 115722 00:16:09.523 20:58:37 -- common/autotest_common.sh@926 -- # '[' -z 115722 ']' 00:16:09.523 20:58:37 -- common/autotest_common.sh@930 -- # kill -0 115722 00:16:09.523 20:58:37 -- common/autotest_common.sh@931 -- # uname 00:16:09.523 20:58:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.523 20:58:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115722 00:16:09.523 20:58:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.523 20:58:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.523 20:58:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115722' 00:16:09.523 killing process with pid 115722 00:16:09.523 20:58:37 -- common/autotest_common.sh@945 -- # kill 115722 00:16:09.523 [2024-06-09 20:58:37.578039] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.523 [2024-06-09 20:58:37.578150] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:09.523 20:58:37 -- common/autotest_common.sh@950 -- # wait 115722 00:16:10.472 ************************************ 00:16:10.472 END TEST raid_state_function_test_sb 00:16:10.472 ************************************ 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:10.472 00:16:10.472 real 0m12.626s 00:16:10.472 user 0m22.272s 00:16:10.472 sys 0m1.533s 00:16:10.472 20:58:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.472 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:10.472 20:58:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:10.472 20:58:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.472 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:16:10.472 ************************************ 00:16:10.472 START TEST raid_superblock_test 00:16:10.472 ************************************ 00:16:10.472 20:58:38 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@344 -- # local strip_size 
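raid_superblock_test drives the same RPC surface but assembles its array from passthru bdevs stacked on malloc disks, each created with a fixed UUID, and passes -s to bdev_raid_create so a superblock is written to the base bdevs (the JSON dumps below report "superblock": true). A condensed, non-authoritative reading of the commands the trace below replays, showing only one of the three base bdevs:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # repeat for pt2/pt3, then create the array; -z 64 sets the strip size in KB, -s requests a superblock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s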
00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=116113 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116113 /var/tmp/spdk-raid.sock 00:16:10.472 20:58:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:10.472 20:58:38 -- common/autotest_common.sh@819 -- # '[' -z 116113 ']' 00:16:10.472 20:58:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:10.472 20:58:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.472 20:58:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:10.472 20:58:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.472 20:58:38 -- common/autotest_common.sh@10 -- # set +x 00:16:10.745 [2024-06-09 20:58:38.646503] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:10.745 [2024-06-09 20:58:38.646709] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116113 ] 00:16:10.745 [2024-06-09 20:58:38.815464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.003 [2024-06-09 20:58:39.003834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.003 [2024-06-09 20:58:39.174001] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.569 20:58:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.569 20:58:39 -- common/autotest_common.sh@852 -- # return 0 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:11.569 20:58:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:11.827 malloc1 00:16:11.827 20:58:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:12.085 [2024-06-09 20:58:40.005667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:12.085 [2024-06-09 20:58:40.005794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:12.085 [2024-06-09 20:58:40.005826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:12.085 [2024-06-09 20:58:40.005874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.085 [2024-06-09 20:58:40.008298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.085 [2024-06-09 20:58:40.008364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:12.085 pt1 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:12.085 20:58:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:12.344 malloc2 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:12.344 [2024-06-09 20:58:40.479577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:12.344 [2024-06-09 20:58:40.479680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.344 [2024-06-09 20:58:40.479722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:12.344 [2024-06-09 20:58:40.479774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.344 [2024-06-09 20:58:40.482112] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.344 [2024-06-09 20:58:40.482176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:12.344 pt2 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:12.344 20:58:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:12.603 malloc3 00:16:12.603 20:58:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:12.861 [2024-06-09 20:58:40.892761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:12.861 [2024-06-09 20:58:40.892876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:12.861 [2024-06-09 20:58:40.892926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:12.861 [2024-06-09 20:58:40.892985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.861 [2024-06-09 20:58:40.895378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.861 [2024-06-09 20:58:40.895454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:12.861 pt3 00:16:12.861 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:12.861 20:58:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:12.861 20:58:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:13.120 [2024-06-09 20:58:41.080825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:13.120 [2024-06-09 20:58:41.082833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:13.120 [2024-06-09 20:58:41.082908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:13.120 [2024-06-09 20:58:41.083139] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:13.120 [2024-06-09 20:58:41.083170] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:13.120 [2024-06-09 20:58:41.083289] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:13.120 [2024-06-09 20:58:41.083654] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:13.120 [2024-06-09 20:58:41.083693] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:13.120 [2024-06-09 20:58:41.083898] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.120 20:58:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.379 20:58:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.379 "name": "raid_bdev1", 00:16:13.379 "uuid": "049d22fd-5d28-4e12-a86c-910704bfae43", 00:16:13.379 "strip_size_kb": 64, 00:16:13.379 "state": "online", 00:16:13.379 "raid_level": "concat", 00:16:13.379 "superblock": true, 00:16:13.379 "num_base_bdevs": 3, 00:16:13.379 "num_base_bdevs_discovered": 3, 00:16:13.379 "num_base_bdevs_operational": 3, 00:16:13.379 "base_bdevs_list": [ 00:16:13.379 { 00:16:13.379 "name": "pt1", 00:16:13.379 "uuid": 
"6c1075ac-a75f-5cc3-a07b-902e974b366b", 00:16:13.379 "is_configured": true, 00:16:13.379 "data_offset": 2048, 00:16:13.379 "data_size": 63488 00:16:13.379 }, 00:16:13.379 { 00:16:13.379 "name": "pt2", 00:16:13.379 "uuid": "0bf4d2e3-f351-5000-ae71-ca847baaf1ec", 00:16:13.379 "is_configured": true, 00:16:13.379 "data_offset": 2048, 00:16:13.379 "data_size": 63488 00:16:13.379 }, 00:16:13.379 { 00:16:13.379 "name": "pt3", 00:16:13.379 "uuid": "38521abc-5715-5bc0-bd22-96ff6c1caddf", 00:16:13.379 "is_configured": true, 00:16:13.379 "data_offset": 2048, 00:16:13.379 "data_size": 63488 00:16:13.379 } 00:16:13.379 ] 00:16:13.379 }' 00:16:13.379 20:58:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.379 20:58:41 -- common/autotest_common.sh@10 -- # set +x 00:16:13.947 20:58:41 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:13.947 20:58:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:14.205 [2024-06-09 20:58:42.149198] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.205 20:58:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=049d22fd-5d28-4e12-a86c-910704bfae43 00:16:14.205 20:58:42 -- bdev/bdev_raid.sh@380 -- # '[' -z 049d22fd-5d28-4e12-a86c-910704bfae43 ']' 00:16:14.205 20:58:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:14.462 [2024-06-09 20:58:42.392979] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.462 [2024-06-09 20:58:42.393005] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.463 [2024-06-09 20:58:42.393086] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.463 [2024-06-09 20:58:42.393152] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.463 [2024-06-09 20:58:42.393163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:14.463 20:58:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.463 20:58:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.720 20:58:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:14.979 20:58:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.979 20:58:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:15.237 20:58:43 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:15.237 20:58:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:15.496 20:58:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:15.496 20:58:43 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:15.496 20:58:43 -- common/autotest_common.sh@640 -- # local es=0 00:16:15.496 20:58:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:15.496 20:58:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.496 20:58:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:15.496 20:58:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.496 20:58:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:15.496 20:58:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.496 20:58:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:15.496 20:58:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.496 20:58:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:15.496 20:58:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:15.755 [2024-06-09 20:58:43.777215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:15.755 [2024-06-09 20:58:43.779343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:15.755 [2024-06-09 20:58:43.779433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:15.755 [2024-06-09 20:58:43.779504] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:15.755 [2024-06-09 20:58:43.779604] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:15.755 [2024-06-09 20:58:43.779699] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:15.755 [2024-06-09 20:58:43.779752] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.755 [2024-06-09 20:58:43.779765] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:15.755 request: 00:16:15.755 { 00:16:15.755 "name": "raid_bdev1", 00:16:15.755 "raid_level": "concat", 00:16:15.755 "base_bdevs": [ 00:16:15.755 "malloc1", 00:16:15.755 "malloc2", 00:16:15.755 "malloc3" 00:16:15.755 ], 00:16:15.755 "superblock": false, 00:16:15.755 "strip_size_kb": 64, 00:16:15.755 "method": "bdev_raid_create", 00:16:15.755 "req_id": 1 00:16:15.755 } 00:16:15.755 Got JSON-RPC error response 00:16:15.755 response: 00:16:15.755 { 00:16:15.755 "code": -17, 00:16:15.755 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:15.755 } 00:16:15.755 20:58:43 -- common/autotest_common.sh@643 -- # es=1 00:16:15.755 20:58:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:15.755 20:58:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:15.755 20:58:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:15.755 20:58:43 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.755 20:58:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:16.014 20:58:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:16.014 20:58:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:16.014 20:58:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.273 [2024-06-09 20:58:44.245249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.273 [2024-06-09 20:58:44.245328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.273 [2024-06-09 20:58:44.245366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:16.273 [2024-06-09 20:58:44.245387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.273 [2024-06-09 20:58:44.247713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.273 [2024-06-09 20:58:44.247782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.273 [2024-06-09 20:58:44.247922] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:16.273 [2024-06-09 20:58:44.247983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.273 pt1 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.273 20:58:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.532 20:58:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.532 "name": "raid_bdev1", 00:16:16.532 "uuid": "049d22fd-5d28-4e12-a86c-910704bfae43", 00:16:16.532 "strip_size_kb": 64, 00:16:16.532 "state": "configuring", 00:16:16.532 "raid_level": "concat", 00:16:16.532 "superblock": true, 00:16:16.532 "num_base_bdevs": 3, 00:16:16.532 "num_base_bdevs_discovered": 1, 00:16:16.532 "num_base_bdevs_operational": 3, 00:16:16.532 "base_bdevs_list": [ 00:16:16.532 { 00:16:16.532 "name": "pt1", 00:16:16.532 "uuid": "6c1075ac-a75f-5cc3-a07b-902e974b366b", 00:16:16.532 "is_configured": true, 00:16:16.532 "data_offset": 2048, 00:16:16.532 "data_size": 63488 00:16:16.532 }, 00:16:16.532 { 00:16:16.532 "name": null, 00:16:16.532 "uuid": "0bf4d2e3-f351-5000-ae71-ca847baaf1ec", 00:16:16.532 "is_configured": false, 00:16:16.532 "data_offset": 2048, 00:16:16.532 "data_size": 63488 00:16:16.532 }, 00:16:16.532 { 00:16:16.532 "name": null, 00:16:16.532 "uuid": "38521abc-5715-5bc0-bd22-96ff6c1caddf", 00:16:16.532 "is_configured": false, 00:16:16.532 
"data_offset": 2048, 00:16:16.532 "data_size": 63488 00:16:16.532 } 00:16:16.532 ] 00:16:16.532 }' 00:16:16.532 20:58:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.532 20:58:44 -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 20:58:45 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:17.100 20:58:45 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.359 [2024-06-09 20:58:45.329527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.359 [2024-06-09 20:58:45.329627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.359 [2024-06-09 20:58:45.329675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:17.359 [2024-06-09 20:58:45.329712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.359 [2024-06-09 20:58:45.330249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.359 [2024-06-09 20:58:45.330306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.359 [2024-06-09 20:58:45.330461] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:17.359 [2024-06-09 20:58:45.330516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.359 pt2 00:16:17.359 20:58:45 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:17.618 [2024-06-09 20:58:45.569625] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:17.618 "name": "raid_bdev1", 00:16:17.618 "uuid": "049d22fd-5d28-4e12-a86c-910704bfae43", 00:16:17.618 "strip_size_kb": 64, 00:16:17.618 "state": "configuring", 00:16:17.618 "raid_level": "concat", 00:16:17.618 "superblock": true, 00:16:17.618 "num_base_bdevs": 3, 00:16:17.618 "num_base_bdevs_discovered": 1, 00:16:17.618 "num_base_bdevs_operational": 3, 00:16:17.618 "base_bdevs_list": [ 00:16:17.618 { 00:16:17.618 "name": "pt1", 00:16:17.618 "uuid": "6c1075ac-a75f-5cc3-a07b-902e974b366b", 00:16:17.618 "is_configured": true, 00:16:17.618 "data_offset": 2048, 00:16:17.618 "data_size": 63488 00:16:17.618 }, 00:16:17.618 { 00:16:17.618 "name": null, 00:16:17.618 "uuid": 
"0bf4d2e3-f351-5000-ae71-ca847baaf1ec", 00:16:17.618 "is_configured": false, 00:16:17.618 "data_offset": 2048, 00:16:17.618 "data_size": 63488 00:16:17.618 }, 00:16:17.618 { 00:16:17.618 "name": null, 00:16:17.618 "uuid": "38521abc-5715-5bc0-bd22-96ff6c1caddf", 00:16:17.618 "is_configured": false, 00:16:17.618 "data_offset": 2048, 00:16:17.618 "data_size": 63488 00:16:17.618 } 00:16:17.618 ] 00:16:17.618 }' 00:16:17.618 20:58:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:17.618 20:58:45 -- common/autotest_common.sh@10 -- # set +x 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.554 [2024-06-09 20:58:46.585796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.554 [2024-06-09 20:58:46.585898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.554 [2024-06-09 20:58:46.585937] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:18.554 [2024-06-09 20:58:46.585963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.554 [2024-06-09 20:58:46.586455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.554 [2024-06-09 20:58:46.586509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.554 [2024-06-09 20:58:46.586624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:18.554 [2024-06-09 20:58:46.586648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.554 pt2 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.554 20:58:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:18.812 [2024-06-09 20:58:46.777855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:18.812 [2024-06-09 20:58:46.777940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.812 [2024-06-09 20:58:46.777982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:18.812 [2024-06-09 20:58:46.778007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.812 [2024-06-09 20:58:46.778477] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.812 [2024-06-09 20:58:46.778523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:18.812 [2024-06-09 20:58:46.778646] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:18.812 [2024-06-09 20:58:46.778670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:18.812 [2024-06-09 20:58:46.778831] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:18.812 [2024-06-09 20:58:46.778852] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.812 [2024-06-09 20:58:46.778953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:16:18.812 [2024-06-09 20:58:46.779290] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:18.812 [2024-06-09 20:58:46.779312] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:18.812 [2024-06-09 20:58:46.779440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.812 pt3 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.813 20:58:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.071 20:58:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.071 "name": "raid_bdev1", 00:16:19.071 "uuid": "049d22fd-5d28-4e12-a86c-910704bfae43", 00:16:19.071 "strip_size_kb": 64, 00:16:19.071 "state": "online", 00:16:19.071 "raid_level": "concat", 00:16:19.071 "superblock": true, 00:16:19.071 "num_base_bdevs": 3, 00:16:19.071 "num_base_bdevs_discovered": 3, 00:16:19.071 "num_base_bdevs_operational": 3, 00:16:19.071 "base_bdevs_list": [ 00:16:19.071 { 00:16:19.071 "name": "pt1", 00:16:19.071 "uuid": "6c1075ac-a75f-5cc3-a07b-902e974b366b", 00:16:19.071 "is_configured": true, 00:16:19.071 "data_offset": 2048, 00:16:19.071 "data_size": 63488 00:16:19.071 }, 00:16:19.071 { 00:16:19.071 "name": "pt2", 00:16:19.072 "uuid": "0bf4d2e3-f351-5000-ae71-ca847baaf1ec", 00:16:19.072 "is_configured": true, 00:16:19.072 "data_offset": 2048, 00:16:19.072 "data_size": 63488 00:16:19.072 }, 00:16:19.072 { 00:16:19.072 "name": "pt3", 00:16:19.072 "uuid": "38521abc-5715-5bc0-bd22-96ff6c1caddf", 00:16:19.072 "is_configured": true, 00:16:19.072 "data_offset": 2048, 00:16:19.072 "data_size": 63488 00:16:19.072 } 00:16:19.072 ] 00:16:19.072 }' 00:16:19.072 20:58:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.072 20:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 20:58:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.639 20:58:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:19.921 [2024-06-09 20:58:47.870402] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.921 20:58:47 -- bdev/bdev_raid.sh@430 -- # '[' 049d22fd-5d28-4e12-a86c-910704bfae43 '!=' 049d22fd-5d28-4e12-a86c-910704bfae43 ']' 00:16:19.921 20:58:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:19.921 20:58:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:19.921 
20:58:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:19.921 20:58:47 -- bdev/bdev_raid.sh@511 -- # killprocess 116113 00:16:19.921 20:58:47 -- common/autotest_common.sh@926 -- # '[' -z 116113 ']' 00:16:19.921 20:58:47 -- common/autotest_common.sh@930 -- # kill -0 116113 00:16:19.921 20:58:47 -- common/autotest_common.sh@931 -- # uname 00:16:19.921 20:58:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:19.921 20:58:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116113 00:16:19.921 killing process with pid 116113 00:16:19.921 20:58:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:19.921 20:58:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:19.921 20:58:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116113' 00:16:19.921 20:58:47 -- common/autotest_common.sh@945 -- # kill 116113 00:16:19.921 [2024-06-09 20:58:47.914276] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.921 20:58:47 -- common/autotest_common.sh@950 -- # wait 116113 00:16:19.921 [2024-06-09 20:58:47.914341] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.921 [2024-06-09 20:58:47.914398] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.921 [2024-06-09 20:58:47.914408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:20.185 [2024-06-09 20:58:48.114480] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.121 ************************************ 00:16:21.121 END TEST raid_superblock_test 00:16:21.121 ************************************ 00:16:21.121 20:58:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:21.121 00:16:21.121 real 0m10.488s 00:16:21.121 user 0m18.166s 00:16:21.121 sys 0m1.304s 00:16:21.121 20:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.121 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:16:21.121 20:58:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:21.121 20:58:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:21.121 20:58:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:21.121 20:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.121 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:16:21.121 ************************************ 00:16:21.122 START TEST raid_state_function_test 00:16:21.122 ************************************ 00:16:21.122 20:58:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=116424 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116424' 00:16:21.122 Process raid pid: 116424 00:16:21.122 20:58:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116424 /var/tmp/spdk-raid.sock 00:16:21.122 20:58:49 -- common/autotest_common.sh@819 -- # '[' -z 116424 ']' 00:16:21.122 20:58:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.122 20:58:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:21.122 20:58:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.122 20:58:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:21.122 20:58:49 -- common/autotest_common.sh@10 -- # set +x 00:16:21.122 [2024-06-09 20:58:49.187935] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
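What the harness is doing at this point, condensed: raid_state_function_test launches a standalone bdev_svc app on a private RPC socket, then drives every state assertion through rpc.py. A minimal hand-run sketch of the same flow, assuming the SPDK tree and socket path used in this run and that jq is installed (bdev names mirror the trace; nothing here is additional API):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    # the test's waitforlisten blocks at this point until the socket accepts RPCs
    # creating the raid before any base bdev exists is expected to park it in
    # state "configuring" with num_base_bdevs_discovered == 0, as dumped below
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # -> configuring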
00:16:21.122 [2024-06-09 20:58:49.188131] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.381 [2024-06-09 20:58:49.353750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.381 [2024-06-09 20:58:49.532338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.640 [2024-06-09 20:58:49.700526] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.207 20:58:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:22.207 20:58:50 -- common/autotest_common.sh@852 -- # return 0 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:22.207 [2024-06-09 20:58:50.253011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.207 [2024-06-09 20:58:50.253087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.207 [2024-06-09 20:58:50.253100] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.207 [2024-06-09 20:58:50.253119] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.207 [2024-06-09 20:58:50.253126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.207 [2024-06-09 20:58:50.253167] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.207 20:58:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.208 20:58:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.208 20:58:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.466 20:58:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.466 "name": "Existed_Raid", 00:16:22.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.466 "strip_size_kb": 0, 00:16:22.466 "state": "configuring", 00:16:22.466 "raid_level": "raid1", 00:16:22.466 "superblock": false, 00:16:22.466 "num_base_bdevs": 3, 00:16:22.466 "num_base_bdevs_discovered": 0, 00:16:22.466 "num_base_bdevs_operational": 3, 00:16:22.466 "base_bdevs_list": [ 00:16:22.466 { 00:16:22.466 "name": "BaseBdev1", 00:16:22.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.466 "is_configured": false, 00:16:22.466 "data_offset": 0, 00:16:22.466 "data_size": 0 00:16:22.466 }, 00:16:22.466 { 00:16:22.466 "name": "BaseBdev2", 00:16:22.466 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:22.466 "is_configured": false, 00:16:22.466 "data_offset": 0, 00:16:22.466 "data_size": 0 00:16:22.466 }, 00:16:22.466 { 00:16:22.466 "name": "BaseBdev3", 00:16:22.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.466 "is_configured": false, 00:16:22.466 "data_offset": 0, 00:16:22.466 "data_size": 0 00:16:22.466 } 00:16:22.466 ] 00:16:22.466 }' 00:16:22.466 20:58:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.466 20:58:50 -- common/autotest_common.sh@10 -- # set +x 00:16:23.033 20:58:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.291 [2024-06-09 20:58:51.265094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.291 [2024-06-09 20:58:51.265133] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:23.291 20:58:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.291 [2024-06-09 20:58:51.453120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.291 [2024-06-09 20:58:51.453179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.291 [2024-06-09 20:58:51.453191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.291 [2024-06-09 20:58:51.453216] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.291 [2024-06-09 20:58:51.453225] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.291 [2024-06-09 20:58:51.453248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.291 20:58:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.549 [2024-06-09 20:58:51.671647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.549 BaseBdev1 00:16:23.549 20:58:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:23.549 20:58:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:23.549 20:58:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:23.549 20:58:51 -- common/autotest_common.sh@889 -- # local i 00:16:23.549 20:58:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:23.549 20:58:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:23.549 20:58:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.807 20:58:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.065 [ 00:16:24.065 { 00:16:24.065 "name": "BaseBdev1", 00:16:24.065 "aliases": [ 00:16:24.065 "023615df-78a9-40b2-b755-575bcfbc5e02" 00:16:24.065 ], 00:16:24.065 "product_name": "Malloc disk", 00:16:24.065 "block_size": 512, 00:16:24.065 "num_blocks": 65536, 00:16:24.065 "uuid": "023615df-78a9-40b2-b755-575bcfbc5e02", 00:16:24.065 "assigned_rate_limits": { 00:16:24.065 "rw_ios_per_sec": 0, 00:16:24.065 "rw_mbytes_per_sec": 0, 00:16:24.065 "r_mbytes_per_sec": 0, 00:16:24.065 "w_mbytes_per_sec": 0 
00:16:24.065 }, 00:16:24.065 "claimed": true, 00:16:24.065 "claim_type": "exclusive_write", 00:16:24.065 "zoned": false, 00:16:24.065 "supported_io_types": { 00:16:24.065 "read": true, 00:16:24.065 "write": true, 00:16:24.065 "unmap": true, 00:16:24.065 "write_zeroes": true, 00:16:24.065 "flush": true, 00:16:24.065 "reset": true, 00:16:24.065 "compare": false, 00:16:24.065 "compare_and_write": false, 00:16:24.065 "abort": true, 00:16:24.065 "nvme_admin": false, 00:16:24.065 "nvme_io": false 00:16:24.065 }, 00:16:24.065 "memory_domains": [ 00:16:24.065 { 00:16:24.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.065 "dma_device_type": 2 00:16:24.065 } 00:16:24.065 ], 00:16:24.065 "driver_specific": {} 00:16:24.065 } 00:16:24.065 ] 00:16:24.065 20:58:52 -- common/autotest_common.sh@895 -- # return 0 00:16:24.065 20:58:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.066 20:58:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.323 20:58:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.323 "name": "Existed_Raid", 00:16:24.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.323 "strip_size_kb": 0, 00:16:24.323 "state": "configuring", 00:16:24.323 "raid_level": "raid1", 00:16:24.323 "superblock": false, 00:16:24.323 "num_base_bdevs": 3, 00:16:24.323 "num_base_bdevs_discovered": 1, 00:16:24.323 "num_base_bdevs_operational": 3, 00:16:24.323 "base_bdevs_list": [ 00:16:24.323 { 00:16:24.323 "name": "BaseBdev1", 00:16:24.323 "uuid": "023615df-78a9-40b2-b755-575bcfbc5e02", 00:16:24.323 "is_configured": true, 00:16:24.323 "data_offset": 0, 00:16:24.323 "data_size": 65536 00:16:24.323 }, 00:16:24.323 { 00:16:24.323 "name": "BaseBdev2", 00:16:24.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.323 "is_configured": false, 00:16:24.323 "data_offset": 0, 00:16:24.323 "data_size": 0 00:16:24.323 }, 00:16:24.323 { 00:16:24.323 "name": "BaseBdev3", 00:16:24.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.323 "is_configured": false, 00:16:24.323 "data_offset": 0, 00:16:24.323 "data_size": 0 00:16:24.323 } 00:16:24.323 ] 00:16:24.323 }' 00:16:24.323 20:58:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.323 20:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:24.888 20:58:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.146 [2024-06-09 20:58:53.084015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.146 [2024-06-09 20:58:53.084079] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:25.146 [2024-06-09 20:58:53.284075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.146 [2024-06-09 20:58:53.285890] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.146 [2024-06-09 20:58:53.285944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.146 [2024-06-09 20:58:53.285956] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.146 [2024-06-09 20:58:53.285981] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.146 20:58:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.147 20:58:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.405 20:58:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.405 "name": "Existed_Raid", 00:16:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.405 "strip_size_kb": 0, 00:16:25.405 "state": "configuring", 00:16:25.405 "raid_level": "raid1", 00:16:25.405 "superblock": false, 00:16:25.405 "num_base_bdevs": 3, 00:16:25.405 "num_base_bdevs_discovered": 1, 00:16:25.405 "num_base_bdevs_operational": 3, 00:16:25.405 "base_bdevs_list": [ 00:16:25.405 { 00:16:25.405 "name": "BaseBdev1", 00:16:25.405 "uuid": "023615df-78a9-40b2-b755-575bcfbc5e02", 00:16:25.405 "is_configured": true, 00:16:25.405 "data_offset": 0, 00:16:25.405 "data_size": 65536 00:16:25.405 }, 00:16:25.405 { 00:16:25.405 "name": "BaseBdev2", 00:16:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.405 "is_configured": false, 00:16:25.405 "data_offset": 0, 00:16:25.405 "data_size": 0 00:16:25.405 }, 00:16:25.405 { 00:16:25.405 "name": "BaseBdev3", 00:16:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.405 "is_configured": false, 00:16:25.405 "data_offset": 0, 00:16:25.405 "data_size": 0 00:16:25.405 } 00:16:25.405 ] 00:16:25.405 }' 00:16:25.405 20:58:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.405 20:58:53 -- common/autotest_common.sh@10 -- # set +x 00:16:26.339 20:58:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.339 [2024-06-09 20:58:54.476273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.339 BaseBdev2 00:16:26.339 20:58:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:26.339 20:58:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:26.339 20:58:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:26.339 20:58:54 -- common/autotest_common.sh@889 -- # local i 00:16:26.339 20:58:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:26.339 20:58:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:26.339 20:58:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.597 20:58:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.856 [ 00:16:26.856 { 00:16:26.856 "name": "BaseBdev2", 00:16:26.856 "aliases": [ 00:16:26.856 "7b685e01-278b-45b5-9c25-685b444ebb03" 00:16:26.856 ], 00:16:26.856 "product_name": "Malloc disk", 00:16:26.856 "block_size": 512, 00:16:26.856 "num_blocks": 65536, 00:16:26.856 "uuid": "7b685e01-278b-45b5-9c25-685b444ebb03", 00:16:26.856 "assigned_rate_limits": { 00:16:26.856 "rw_ios_per_sec": 0, 00:16:26.856 "rw_mbytes_per_sec": 0, 00:16:26.856 "r_mbytes_per_sec": 0, 00:16:26.856 "w_mbytes_per_sec": 0 00:16:26.856 }, 00:16:26.856 "claimed": true, 00:16:26.856 "claim_type": "exclusive_write", 00:16:26.856 "zoned": false, 00:16:26.856 "supported_io_types": { 00:16:26.856 "read": true, 00:16:26.856 "write": true, 00:16:26.856 "unmap": true, 00:16:26.856 "write_zeroes": true, 00:16:26.856 "flush": true, 00:16:26.856 "reset": true, 00:16:26.856 "compare": false, 00:16:26.856 "compare_and_write": false, 00:16:26.856 "abort": true, 00:16:26.856 "nvme_admin": false, 00:16:26.856 "nvme_io": false 00:16:26.856 }, 00:16:26.856 "memory_domains": [ 00:16:26.856 { 00:16:26.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.856 "dma_device_type": 2 00:16:26.856 } 00:16:26.856 ], 00:16:26.856 "driver_specific": {} 00:16:26.856 } 00:16:26.856 ] 00:16:26.856 20:58:54 -- common/autotest_common.sh@895 -- # return 0 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.856 20:58:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.115 20:58:55 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:27.115 "name": "Existed_Raid", 00:16:27.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.115 "strip_size_kb": 0, 00:16:27.116 "state": "configuring", 00:16:27.116 "raid_level": "raid1", 00:16:27.116 "superblock": false, 00:16:27.116 "num_base_bdevs": 3, 00:16:27.116 "num_base_bdevs_discovered": 2, 00:16:27.116 "num_base_bdevs_operational": 3, 00:16:27.116 "base_bdevs_list": [ 00:16:27.116 { 00:16:27.116 "name": "BaseBdev1", 00:16:27.116 "uuid": "023615df-78a9-40b2-b755-575bcfbc5e02", 00:16:27.116 "is_configured": true, 00:16:27.116 "data_offset": 0, 00:16:27.116 "data_size": 65536 00:16:27.116 }, 00:16:27.116 { 00:16:27.116 "name": "BaseBdev2", 00:16:27.116 "uuid": "7b685e01-278b-45b5-9c25-685b444ebb03", 00:16:27.116 "is_configured": true, 00:16:27.116 "data_offset": 0, 00:16:27.116 "data_size": 65536 00:16:27.116 }, 00:16:27.116 { 00:16:27.116 "name": "BaseBdev3", 00:16:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.116 "is_configured": false, 00:16:27.116 "data_offset": 0, 00:16:27.116 "data_size": 0 00:16:27.116 } 00:16:27.116 ] 00:16:27.116 }' 00:16:27.116 20:58:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.116 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:16:27.682 20:58:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.941 [2024-06-09 20:58:56.012436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.941 [2024-06-09 20:58:56.012488] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:27.941 [2024-06-09 20:58:56.012498] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:27.941 [2024-06-09 20:58:56.012614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:27.941 [2024-06-09 20:58:56.012945] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:27.941 [2024-06-09 20:58:56.012959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:27.941 [2024-06-09 20:58:56.013214] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.941 BaseBdev3 00:16:27.941 20:58:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:27.941 20:58:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:27.941 20:58:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:27.941 20:58:56 -- common/autotest_common.sh@889 -- # local i 00:16:27.941 20:58:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:27.941 20:58:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:27.941 20:58:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.200 20:58:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.458 [ 00:16:28.458 { 00:16:28.458 "name": "BaseBdev3", 00:16:28.458 "aliases": [ 00:16:28.458 "6ea5b29a-63cf-428a-bb07-551da0002269" 00:16:28.458 ], 00:16:28.458 "product_name": "Malloc disk", 00:16:28.459 "block_size": 512, 00:16:28.459 "num_blocks": 65536, 00:16:28.459 "uuid": "6ea5b29a-63cf-428a-bb07-551da0002269", 00:16:28.459 "assigned_rate_limits": { 00:16:28.459 "rw_ios_per_sec": 0, 00:16:28.459 "rw_mbytes_per_sec": 0, 
00:16:28.459 "r_mbytes_per_sec": 0, 00:16:28.459 "w_mbytes_per_sec": 0 00:16:28.459 }, 00:16:28.459 "claimed": true, 00:16:28.459 "claim_type": "exclusive_write", 00:16:28.459 "zoned": false, 00:16:28.459 "supported_io_types": { 00:16:28.459 "read": true, 00:16:28.459 "write": true, 00:16:28.459 "unmap": true, 00:16:28.459 "write_zeroes": true, 00:16:28.459 "flush": true, 00:16:28.459 "reset": true, 00:16:28.459 "compare": false, 00:16:28.459 "compare_and_write": false, 00:16:28.459 "abort": true, 00:16:28.459 "nvme_admin": false, 00:16:28.459 "nvme_io": false 00:16:28.459 }, 00:16:28.459 "memory_domains": [ 00:16:28.459 { 00:16:28.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.459 "dma_device_type": 2 00:16:28.459 } 00:16:28.459 ], 00:16:28.459 "driver_specific": {} 00:16:28.459 } 00:16:28.459 ] 00:16:28.459 20:58:56 -- common/autotest_common.sh@895 -- # return 0 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.459 20:58:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.717 20:58:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.717 "name": "Existed_Raid", 00:16:28.717 "uuid": "b95b1dd7-7c9d-4a47-9472-928acf1ab8b3", 00:16:28.717 "strip_size_kb": 0, 00:16:28.717 "state": "online", 00:16:28.717 "raid_level": "raid1", 00:16:28.717 "superblock": false, 00:16:28.717 "num_base_bdevs": 3, 00:16:28.717 "num_base_bdevs_discovered": 3, 00:16:28.717 "num_base_bdevs_operational": 3, 00:16:28.717 "base_bdevs_list": [ 00:16:28.717 { 00:16:28.717 "name": "BaseBdev1", 00:16:28.717 "uuid": "023615df-78a9-40b2-b755-575bcfbc5e02", 00:16:28.717 "is_configured": true, 00:16:28.717 "data_offset": 0, 00:16:28.717 "data_size": 65536 00:16:28.717 }, 00:16:28.717 { 00:16:28.717 "name": "BaseBdev2", 00:16:28.717 "uuid": "7b685e01-278b-45b5-9c25-685b444ebb03", 00:16:28.717 "is_configured": true, 00:16:28.717 "data_offset": 0, 00:16:28.717 "data_size": 65536 00:16:28.717 }, 00:16:28.717 { 00:16:28.717 "name": "BaseBdev3", 00:16:28.717 "uuid": "6ea5b29a-63cf-428a-bb07-551da0002269", 00:16:28.717 "is_configured": true, 00:16:28.717 "data_offset": 0, 00:16:28.717 "data_size": 65536 00:16:28.717 } 00:16:28.717 ] 00:16:28.717 }' 00:16:28.717 20:58:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.717 20:58:56 -- common/autotest_common.sh@10 -- # set +x 00:16:29.284 20:58:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:29.549 [2024-06-09 
20:58:57.509890] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.549 20:58:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.831 20:58:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.831 "name": "Existed_Raid", 00:16:29.831 "uuid": "b95b1dd7-7c9d-4a47-9472-928acf1ab8b3", 00:16:29.831 "strip_size_kb": 0, 00:16:29.831 "state": "online", 00:16:29.831 "raid_level": "raid1", 00:16:29.831 "superblock": false, 00:16:29.831 "num_base_bdevs": 3, 00:16:29.831 "num_base_bdevs_discovered": 2, 00:16:29.831 "num_base_bdevs_operational": 2, 00:16:29.831 "base_bdevs_list": [ 00:16:29.831 { 00:16:29.831 "name": null, 00:16:29.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.831 "is_configured": false, 00:16:29.831 "data_offset": 0, 00:16:29.831 "data_size": 65536 00:16:29.831 }, 00:16:29.831 { 00:16:29.831 "name": "BaseBdev2", 00:16:29.831 "uuid": "7b685e01-278b-45b5-9c25-685b444ebb03", 00:16:29.831 "is_configured": true, 00:16:29.831 "data_offset": 0, 00:16:29.831 "data_size": 65536 00:16:29.831 }, 00:16:29.831 { 00:16:29.831 "name": "BaseBdev3", 00:16:29.831 "uuid": "6ea5b29a-63cf-428a-bb07-551da0002269", 00:16:29.831 "is_configured": true, 00:16:29.831 "data_offset": 0, 00:16:29.831 "data_size": 65536 00:16:29.831 } 00:16:29.831 ] 00:16:29.831 }' 00:16:29.831 20:58:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.831 20:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:30.411 20:58:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:30.411 20:58:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:30.411 20:58:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.411 20:58:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:30.668 20:58:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:30.668 20:58:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:30.668 20:58:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:30.926 [2024-06-09 20:58:58.876344] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
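The deletions just traced are the redundancy half of the test: raid1 has redundancy (has_redundancy returned 0 earlier in the trace), so removing a single base bdev must leave Existed_Raid online rather than failing the array. The same check in isolation, as a sketch against this run's socket (the compact jq output line is illustrative, not one of the test's own helpers):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")
                 | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # per the dump above: "online 2/2", with BaseBdev1's slot reduced to a
    # null entry in base_bdevs_list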
00:16:30.926 20:58:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:30.926 20:58:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:30.926 20:58:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.926 20:58:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:31.184 20:58:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:31.184 20:58:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.184 20:58:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:31.443 [2024-06-09 20:58:59.427963] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:31.443 [2024-06-09 20:58:59.427997] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.443 [2024-06-09 20:58:59.428057] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.443 [2024-06-09 20:58:59.492354] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.443 [2024-06-09 20:58:59.492392] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:31.443 20:58:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:31.443 20:58:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:31.443 20:58:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.443 20:58:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:31.702 20:58:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:31.702 20:58:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:31.702 20:58:59 -- bdev/bdev_raid.sh@287 -- # killprocess 116424 00:16:31.702 20:58:59 -- common/autotest_common.sh@926 -- # '[' -z 116424 ']' 00:16:31.702 20:58:59 -- common/autotest_common.sh@930 -- # kill -0 116424 00:16:31.702 20:58:59 -- common/autotest_common.sh@931 -- # uname 00:16:31.702 20:58:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:31.702 20:58:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116424 00:16:31.702 killing process with pid 116424 00:16:31.702 20:58:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:31.702 20:58:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:31.702 20:58:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116424' 00:16:31.702 20:58:59 -- common/autotest_common.sh@945 -- # kill 116424 00:16:31.702 [2024-06-09 20:58:59.731381] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:31.702 20:58:59 -- common/autotest_common.sh@950 -- # wait 116424 00:16:31.702 [2024-06-09 20:58:59.731488] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.637 ************************************ 00:16:32.637 END TEST raid_state_function_test 00:16:32.637 ************************************ 00:16:32.637 20:59:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:32.637 00:16:32.637 real 0m11.577s 00:16:32.637 user 0m20.428s 00:16:32.637 sys 0m1.327s 00:16:32.637 20:59:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.637 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
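That closes the no-superblock pass; raid_state_function_test_sb, which starts next, drives the identical state machine with superblock=true, and the harness's only functional change is one extra flag on creation. The two invocations side by side (sketch; socket path as throughout):

    # superblock=false: the dumps above show data_offset 0, data_size 65536
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # superblock=true (-s): the dumps below show data_offset 2048 and
    # data_size 63488 on each base bdev, the region set aside for raid metadata
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid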
00:16:32.638 20:59:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:32.638 20:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.638 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:32.638 ************************************ 00:16:32.638 START TEST raid_state_function_test_sb 00:16:32.638 ************************************ 00:16:32.638 20:59:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=116794 00:16:32.638 Process raid pid: 116794 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116794' 00:16:32.638 20:59:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116794 /var/tmp/spdk-raid.sock 00:16:32.638 20:59:00 -- common/autotest_common.sh@819 -- # '[' -z 116794 ']' 00:16:32.638 20:59:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:32.638 20:59:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:32.638 20:59:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:32.638 20:59:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.638 20:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:32.896 [2024-06-09 20:59:00.829244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
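One recurring pattern in both passes, worth noting before the trace resumes: each waitforbdev call expands into the two RPCs visible throughout the log. A sketch of that pair as it could be issued by hand (BaseBdev1 and the 2000 value are the helper's defaults in this run; -t appears to act as a wait timeout for the bdev to show up, matching how the helper uses it):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
    # the second call returns the bdev's JSON once it exists (or errors out on
    # timeout), so a subsequent bdev_raid_create sees the member as present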
00:16:32.896 [2024-06-09 20:59:00.829449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.896 [2024-06-09 20:59:01.000761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.155 [2024-06-09 20:59:01.172258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.413 [2024-06-09 20:59:01.351552] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.671 20:59:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.671 20:59:01 -- common/autotest_common.sh@852 -- # return 0 00:16:33.671 20:59:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.928 [2024-06-09 20:59:01.933299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.928 [2024-06-09 20:59:01.933388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.928 [2024-06-09 20:59:01.933416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.928 [2024-06-09 20:59:01.933437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.928 [2024-06-09 20:59:01.933444] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:33.928 [2024-06-09 20:59:01.933484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:33.928 20:59:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:33.928 20:59:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.929 20:59:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.186 20:59:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.186 "name": "Existed_Raid", 00:16:34.186 "uuid": "34100e51-5381-4bf1-8326-28d6207f8928", 00:16:34.186 "strip_size_kb": 0, 00:16:34.186 "state": "configuring", 00:16:34.186 "raid_level": "raid1", 00:16:34.186 "superblock": true, 00:16:34.186 "num_base_bdevs": 3, 00:16:34.186 "num_base_bdevs_discovered": 0, 00:16:34.186 "num_base_bdevs_operational": 3, 00:16:34.186 "base_bdevs_list": [ 00:16:34.186 { 00:16:34.186 "name": "BaseBdev1", 00:16:34.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.186 "is_configured": false, 00:16:34.186 "data_offset": 0, 00:16:34.186 "data_size": 0 00:16:34.186 }, 00:16:34.186 { 00:16:34.186 "name": "BaseBdev2", 00:16:34.186 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:34.186 "is_configured": false, 00:16:34.186 "data_offset": 0, 00:16:34.186 "data_size": 0 00:16:34.186 }, 00:16:34.186 { 00:16:34.186 "name": "BaseBdev3", 00:16:34.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.186 "is_configured": false, 00:16:34.186 "data_offset": 0, 00:16:34.186 "data_size": 0 00:16:34.186 } 00:16:34.186 ] 00:16:34.186 }' 00:16:34.186 20:59:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.186 20:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:34.750 20:59:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:35.007 [2024-06-09 20:59:02.973406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.007 [2024-06-09 20:59:02.973467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:35.007 20:59:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.264 [2024-06-09 20:59:03.237499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.264 [2024-06-09 20:59:03.237602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.264 [2024-06-09 20:59:03.237631] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.264 [2024-06-09 20:59:03.237658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.264 [2024-06-09 20:59:03.237666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.264 [2024-06-09 20:59:03.237692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.264 20:59:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.522 [2024-06-09 20:59:03.483314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.522 BaseBdev1 00:16:35.522 20:59:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:35.522 20:59:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:35.522 20:59:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:35.522 20:59:03 -- common/autotest_common.sh@889 -- # local i 00:16:35.522 20:59:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:35.522 20:59:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:35.522 20:59:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.780 20:59:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.037 [ 00:16:36.037 { 00:16:36.037 "name": "BaseBdev1", 00:16:36.037 "aliases": [ 00:16:36.037 "d938f305-acf8-418a-9775-9a086c6c41d1" 00:16:36.037 ], 00:16:36.037 "product_name": "Malloc disk", 00:16:36.037 "block_size": 512, 00:16:36.037 "num_blocks": 65536, 00:16:36.038 "uuid": "d938f305-acf8-418a-9775-9a086c6c41d1", 00:16:36.038 "assigned_rate_limits": { 00:16:36.038 "rw_ios_per_sec": 0, 00:16:36.038 "rw_mbytes_per_sec": 0, 00:16:36.038 "r_mbytes_per_sec": 0, 00:16:36.038 "w_mbytes_per_sec": 0 
00:16:36.038 }, 00:16:36.038 "claimed": true, 00:16:36.038 "claim_type": "exclusive_write", 00:16:36.038 "zoned": false, 00:16:36.038 "supported_io_types": { 00:16:36.038 "read": true, 00:16:36.038 "write": true, 00:16:36.038 "unmap": true, 00:16:36.038 "write_zeroes": true, 00:16:36.038 "flush": true, 00:16:36.038 "reset": true, 00:16:36.038 "compare": false, 00:16:36.038 "compare_and_write": false, 00:16:36.038 "abort": true, 00:16:36.038 "nvme_admin": false, 00:16:36.038 "nvme_io": false 00:16:36.038 }, 00:16:36.038 "memory_domains": [ 00:16:36.038 { 00:16:36.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.038 "dma_device_type": 2 00:16:36.038 } 00:16:36.038 ], 00:16:36.038 "driver_specific": {} 00:16:36.038 } 00:16:36.038 ] 00:16:36.038 20:59:03 -- common/autotest_common.sh@895 -- # return 0 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.038 20:59:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.296 20:59:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.296 "name": "Existed_Raid", 00:16:36.296 "uuid": "02194214-6720-4e11-aa48-ab0c67be0333", 00:16:36.296 "strip_size_kb": 0, 00:16:36.296 "state": "configuring", 00:16:36.296 "raid_level": "raid1", 00:16:36.296 "superblock": true, 00:16:36.296 "num_base_bdevs": 3, 00:16:36.296 "num_base_bdevs_discovered": 1, 00:16:36.296 "num_base_bdevs_operational": 3, 00:16:36.296 "base_bdevs_list": [ 00:16:36.296 { 00:16:36.296 "name": "BaseBdev1", 00:16:36.296 "uuid": "d938f305-acf8-418a-9775-9a086c6c41d1", 00:16:36.296 "is_configured": true, 00:16:36.296 "data_offset": 2048, 00:16:36.296 "data_size": 63488 00:16:36.296 }, 00:16:36.296 { 00:16:36.296 "name": "BaseBdev2", 00:16:36.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.296 "is_configured": false, 00:16:36.296 "data_offset": 0, 00:16:36.296 "data_size": 0 00:16:36.296 }, 00:16:36.296 { 00:16:36.296 "name": "BaseBdev3", 00:16:36.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.296 "is_configured": false, 00:16:36.296 "data_offset": 0, 00:16:36.296 "data_size": 0 00:16:36.296 } 00:16:36.296 ] 00:16:36.296 }' 00:16:36.296 20:59:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.296 20:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:36.862 20:59:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:36.862 [2024-06-09 20:59:04.971591] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.862 [2024-06-09 20:59:04.971661] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:16:36.862 20:59:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:36.862 20:59:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:37.120 20:59:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.378 BaseBdev1 00:16:37.378 20:59:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:37.378 20:59:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:37.378 20:59:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:37.378 20:59:05 -- common/autotest_common.sh@889 -- # local i 00:16:37.378 20:59:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:37.378 20:59:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:37.378 20:59:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.637 20:59:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:37.896 [ 00:16:37.896 { 00:16:37.896 "name": "BaseBdev1", 00:16:37.896 "aliases": [ 00:16:37.896 "042a50c2-b7f2-4e62-95cd-12e5d8e5a174" 00:16:37.896 ], 00:16:37.896 "product_name": "Malloc disk", 00:16:37.896 "block_size": 512, 00:16:37.896 "num_blocks": 65536, 00:16:37.896 "uuid": "042a50c2-b7f2-4e62-95cd-12e5d8e5a174", 00:16:37.896 "assigned_rate_limits": { 00:16:37.896 "rw_ios_per_sec": 0, 00:16:37.896 "rw_mbytes_per_sec": 0, 00:16:37.896 "r_mbytes_per_sec": 0, 00:16:37.896 "w_mbytes_per_sec": 0 00:16:37.896 }, 00:16:37.896 "claimed": false, 00:16:37.896 "zoned": false, 00:16:37.896 "supported_io_types": { 00:16:37.896 "read": true, 00:16:37.896 "write": true, 00:16:37.896 "unmap": true, 00:16:37.896 "write_zeroes": true, 00:16:37.896 "flush": true, 00:16:37.896 "reset": true, 00:16:37.896 "compare": false, 00:16:37.896 "compare_and_write": false, 00:16:37.896 "abort": true, 00:16:37.896 "nvme_admin": false, 00:16:37.896 "nvme_io": false 00:16:37.896 }, 00:16:37.896 "memory_domains": [ 00:16:37.896 { 00:16:37.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.896 "dma_device_type": 2 00:16:37.896 } 00:16:37.896 ], 00:16:37.896 "driver_specific": {} 00:16:37.896 } 00:16:37.896 ] 00:16:37.896 20:59:05 -- common/autotest_common.sh@895 -- # return 0 00:16:37.896 20:59:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.155 [2024-06-09 20:59:06.193859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.155 [2024-06-09 20:59:06.195570] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.155 [2024-06-09 20:59:06.195627] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.155 [2024-06-09 20:59:06.195654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.155 [2024-06-09 20:59:06.195677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.155 20:59:06 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.155 20:59:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.414 20:59:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.414 "name": "Existed_Raid", 00:16:38.414 "uuid": "9a87aa6d-a985-48d2-b49a-7edcec79605b", 00:16:38.414 "strip_size_kb": 0, 00:16:38.414 "state": "configuring", 00:16:38.414 "raid_level": "raid1", 00:16:38.414 "superblock": true, 00:16:38.414 "num_base_bdevs": 3, 00:16:38.414 "num_base_bdevs_discovered": 1, 00:16:38.414 "num_base_bdevs_operational": 3, 00:16:38.414 "base_bdevs_list": [ 00:16:38.414 { 00:16:38.414 "name": "BaseBdev1", 00:16:38.414 "uuid": "042a50c2-b7f2-4e62-95cd-12e5d8e5a174", 00:16:38.414 "is_configured": true, 00:16:38.414 "data_offset": 2048, 00:16:38.414 "data_size": 63488 00:16:38.414 }, 00:16:38.414 { 00:16:38.414 "name": "BaseBdev2", 00:16:38.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.414 "is_configured": false, 00:16:38.414 "data_offset": 0, 00:16:38.414 "data_size": 0 00:16:38.414 }, 00:16:38.414 { 00:16:38.414 "name": "BaseBdev3", 00:16:38.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.414 "is_configured": false, 00:16:38.414 "data_offset": 0, 00:16:38.414 "data_size": 0 00:16:38.414 } 00:16:38.414 ] 00:16:38.414 }' 00:16:38.414 20:59:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.414 20:59:06 -- common/autotest_common.sh@10 -- # set +x 00:16:38.985 20:59:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.249 [2024-06-09 20:59:07.214588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.249 BaseBdev2 00:16:39.249 20:59:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:39.249 20:59:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:39.249 20:59:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:39.249 20:59:07 -- common/autotest_common.sh@889 -- # local i 00:16:39.249 20:59:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:39.249 20:59:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:39.249 20:59:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.507 20:59:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.766 [ 00:16:39.766 { 00:16:39.766 "name": "BaseBdev2", 00:16:39.766 "aliases": [ 00:16:39.766 
"62e90b9d-f8d6-45a8-9c35-9f5a8979f0e7" 00:16:39.766 ], 00:16:39.766 "product_name": "Malloc disk", 00:16:39.766 "block_size": 512, 00:16:39.766 "num_blocks": 65536, 00:16:39.766 "uuid": "62e90b9d-f8d6-45a8-9c35-9f5a8979f0e7", 00:16:39.766 "assigned_rate_limits": { 00:16:39.766 "rw_ios_per_sec": 0, 00:16:39.766 "rw_mbytes_per_sec": 0, 00:16:39.766 "r_mbytes_per_sec": 0, 00:16:39.766 "w_mbytes_per_sec": 0 00:16:39.766 }, 00:16:39.766 "claimed": true, 00:16:39.766 "claim_type": "exclusive_write", 00:16:39.766 "zoned": false, 00:16:39.766 "supported_io_types": { 00:16:39.766 "read": true, 00:16:39.766 "write": true, 00:16:39.766 "unmap": true, 00:16:39.766 "write_zeroes": true, 00:16:39.766 "flush": true, 00:16:39.766 "reset": true, 00:16:39.766 "compare": false, 00:16:39.766 "compare_and_write": false, 00:16:39.766 "abort": true, 00:16:39.766 "nvme_admin": false, 00:16:39.766 "nvme_io": false 00:16:39.766 }, 00:16:39.766 "memory_domains": [ 00:16:39.766 { 00:16:39.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.766 "dma_device_type": 2 00:16:39.766 } 00:16:39.766 ], 00:16:39.766 "driver_specific": {} 00:16:39.766 } 00:16:39.766 ] 00:16:39.766 20:59:07 -- common/autotest_common.sh@895 -- # return 0 00:16:39.766 20:59:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.767 20:59:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.025 20:59:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.025 "name": "Existed_Raid", 00:16:40.025 "uuid": "9a87aa6d-a985-48d2-b49a-7edcec79605b", 00:16:40.025 "strip_size_kb": 0, 00:16:40.025 "state": "configuring", 00:16:40.025 "raid_level": "raid1", 00:16:40.025 "superblock": true, 00:16:40.025 "num_base_bdevs": 3, 00:16:40.025 "num_base_bdevs_discovered": 2, 00:16:40.025 "num_base_bdevs_operational": 3, 00:16:40.025 "base_bdevs_list": [ 00:16:40.025 { 00:16:40.025 "name": "BaseBdev1", 00:16:40.025 "uuid": "042a50c2-b7f2-4e62-95cd-12e5d8e5a174", 00:16:40.025 "is_configured": true, 00:16:40.025 "data_offset": 2048, 00:16:40.025 "data_size": 63488 00:16:40.025 }, 00:16:40.025 { 00:16:40.025 "name": "BaseBdev2", 00:16:40.025 "uuid": "62e90b9d-f8d6-45a8-9c35-9f5a8979f0e7", 00:16:40.025 "is_configured": true, 00:16:40.025 "data_offset": 2048, 00:16:40.025 "data_size": 63488 00:16:40.025 }, 00:16:40.025 { 00:16:40.025 "name": "BaseBdev3", 00:16:40.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.025 "is_configured": false, 00:16:40.025 "data_offset": 0, 00:16:40.025 "data_size": 0 00:16:40.025 } 
00:16:40.025 ] 00:16:40.025 }' 00:16:40.025 20:59:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.025 20:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:40.592 20:59:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.851 [2024-06-09 20:59:08.814014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.851 [2024-06-09 20:59:08.814256] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:40.851 [2024-06-09 20:59:08.814271] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:40.851 [2024-06-09 20:59:08.814453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:40.851 BaseBdev3 00:16:40.851 [2024-06-09 20:59:08.814847] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:40.851 [2024-06-09 20:59:08.814862] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:40.851 [2024-06-09 20:59:08.815004] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.851 20:59:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:40.851 20:59:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:40.851 20:59:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:40.851 20:59:08 -- common/autotest_common.sh@889 -- # local i 00:16:40.851 20:59:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:40.851 20:59:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:40.851 20:59:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.110 20:59:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.110 [ 00:16:41.110 { 00:16:41.110 "name": "BaseBdev3", 00:16:41.110 "aliases": [ 00:16:41.110 "58aea26b-a1a6-4391-a28e-cad0cac51f5a" 00:16:41.110 ], 00:16:41.110 "product_name": "Malloc disk", 00:16:41.110 "block_size": 512, 00:16:41.110 "num_blocks": 65536, 00:16:41.110 "uuid": "58aea26b-a1a6-4391-a28e-cad0cac51f5a", 00:16:41.110 "assigned_rate_limits": { 00:16:41.110 "rw_ios_per_sec": 0, 00:16:41.110 "rw_mbytes_per_sec": 0, 00:16:41.110 "r_mbytes_per_sec": 0, 00:16:41.110 "w_mbytes_per_sec": 0 00:16:41.110 }, 00:16:41.110 "claimed": true, 00:16:41.110 "claim_type": "exclusive_write", 00:16:41.110 "zoned": false, 00:16:41.110 "supported_io_types": { 00:16:41.110 "read": true, 00:16:41.110 "write": true, 00:16:41.110 "unmap": true, 00:16:41.110 "write_zeroes": true, 00:16:41.110 "flush": true, 00:16:41.110 "reset": true, 00:16:41.110 "compare": false, 00:16:41.110 "compare_and_write": false, 00:16:41.110 "abort": true, 00:16:41.110 "nvme_admin": false, 00:16:41.110 "nvme_io": false 00:16:41.110 }, 00:16:41.110 "memory_domains": [ 00:16:41.110 { 00:16:41.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.110 "dma_device_type": 2 00:16:41.110 } 00:16:41.110 ], 00:16:41.110 "driver_specific": {} 00:16:41.110 } 00:16:41.110 ] 00:16:41.110 20:59:09 -- common/autotest_common.sh@895 -- # return 0 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.110 20:59:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.368 20:59:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.368 "name": "Existed_Raid", 00:16:41.368 "uuid": "9a87aa6d-a985-48d2-b49a-7edcec79605b", 00:16:41.368 "strip_size_kb": 0, 00:16:41.368 "state": "online", 00:16:41.368 "raid_level": "raid1", 00:16:41.368 "superblock": true, 00:16:41.368 "num_base_bdevs": 3, 00:16:41.368 "num_base_bdevs_discovered": 3, 00:16:41.368 "num_base_bdevs_operational": 3, 00:16:41.368 "base_bdevs_list": [ 00:16:41.368 { 00:16:41.368 "name": "BaseBdev1", 00:16:41.368 "uuid": "042a50c2-b7f2-4e62-95cd-12e5d8e5a174", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 }, 00:16:41.368 { 00:16:41.368 "name": "BaseBdev2", 00:16:41.368 "uuid": "62e90b9d-f8d6-45a8-9c35-9f5a8979f0e7", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 }, 00:16:41.368 { 00:16:41.368 "name": "BaseBdev3", 00:16:41.368 "uuid": "58aea26b-a1a6-4391-a28e-cad0cac51f5a", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 } 00:16:41.368 ] 00:16:41.368 }' 00:16:41.368 20:59:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.368 20:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:41.935 20:59:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.194 [2024-06-09 20:59:10.294011] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
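[Annotation] The repeated state checks in this trace are driven by verify_raid_bdev_state, which fetches the raid bdev list over the RPC socket and filters it with jq. Below is a minimal sketch of that check, built only from the rpc.py and jq invocations visible above; the helper name is hypothetical, and the real function in the test suite also compares raid_level, strip_size and the base-bdev counts, not just .state:

    # Hypothetical abbreviated re-creation of the traced state check.
    verify_state_sketch() {
        local name=$1 expected=$2
        local state
        state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                    bdev_raid_get_bdevs all |
                jq -r ".[] | select(.name == \"$name\").state")
        [[ $state == "$expected" ]]
    }
    # usage, matching the call traced above:
    #   verify_state_sketch Existed_Raid online
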
00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.453 "name": "Existed_Raid", 00:16:42.453 "uuid": "9a87aa6d-a985-48d2-b49a-7edcec79605b", 00:16:42.453 "strip_size_kb": 0, 00:16:42.453 "state": "online", 00:16:42.453 "raid_level": "raid1", 00:16:42.453 "superblock": true, 00:16:42.453 "num_base_bdevs": 3, 00:16:42.453 "num_base_bdevs_discovered": 2, 00:16:42.453 "num_base_bdevs_operational": 2, 00:16:42.453 "base_bdevs_list": [ 00:16:42.453 { 00:16:42.453 "name": null, 00:16:42.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.453 "is_configured": false, 00:16:42.453 "data_offset": 2048, 00:16:42.453 "data_size": 63488 00:16:42.453 }, 00:16:42.453 { 00:16:42.453 "name": "BaseBdev2", 00:16:42.453 "uuid": "62e90b9d-f8d6-45a8-9c35-9f5a8979f0e7", 00:16:42.453 "is_configured": true, 00:16:42.453 "data_offset": 2048, 00:16:42.453 "data_size": 63488 00:16:42.453 }, 00:16:42.453 { 00:16:42.453 "name": "BaseBdev3", 00:16:42.453 "uuid": "58aea26b-a1a6-4391-a28e-cad0cac51f5a", 00:16:42.453 "is_configured": true, 00:16:42.453 "data_offset": 2048, 00:16:42.453 "data_size": 63488 00:16:42.453 } 00:16:42.453 ] 00:16:42.453 }' 00:16:42.453 20:59:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.453 20:59:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.023 20:59:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:43.023 20:59:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.280 20:59:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:43.280 20:59:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.280 20:59:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:43.280 20:59:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.280 20:59:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:43.538 [2024-06-09 20:59:11.652104] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.796 20:59:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:44.054 [2024-06-09 20:59:12.123720] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.054 [2024-06-09 20:59:12.123773] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.054 [2024-06-09 20:59:12.123850] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.054 [2024-06-09 20:59:12.191878] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.054 [2024-06-09 20:59:12.191915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:44.054 20:59:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.054 20:59:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.054 20:59:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.054 20:59:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.311 20:59:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:44.311 20:59:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:44.311 20:59:12 -- bdev/bdev_raid.sh@287 -- # killprocess 116794 00:16:44.311 20:59:12 -- common/autotest_common.sh@926 -- # '[' -z 116794 ']' 00:16:44.311 20:59:12 -- common/autotest_common.sh@930 -- # kill -0 116794 00:16:44.311 20:59:12 -- common/autotest_common.sh@931 -- # uname 00:16:44.311 20:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.311 20:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116794 00:16:44.311 20:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:44.311 20:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:44.311 20:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116794' 00:16:44.311 killing process with pid 116794 00:16:44.311 20:59:12 -- common/autotest_common.sh@945 -- # kill 116794 00:16:44.311 [2024-06-09 20:59:12.479149] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.311 [2024-06-09 20:59:12.479284] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.311 20:59:12 -- common/autotest_common.sh@950 -- # wait 116794 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:45.686 00:16:45.686 real 0m12.769s 00:16:45.686 user 0m22.390s 00:16:45.686 sys 0m1.549s 00:16:45.686 20:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.686 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:16:45.686 ************************************ 00:16:45.686 END TEST raid_state_function_test_sb 00:16:45.686 ************************************ 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:45.686 20:59:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:45.686 20:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.686 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:16:45.686 ************************************ 00:16:45.686 START TEST raid_superblock_test 00:16:45.686 ************************************ 00:16:45.686 20:59:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=117192 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:45.686 20:59:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117192 /var/tmp/spdk-raid.sock 00:16:45.686 20:59:13 -- common/autotest_common.sh@819 -- # '[' -z 117192 ']' 00:16:45.686 20:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.686 20:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.686 20:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.686 20:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.686 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:16:45.686 [2024-06-09 20:59:13.647123] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:45.686 [2024-06-09 20:59:13.647339] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117192 ] 00:16:45.686 [2024-06-09 20:59:13.816884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.945 [2024-06-09 20:59:14.044122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.203 [2024-06-09 20:59:14.210957] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.462 20:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.462 20:59:14 -- common/autotest_common.sh@852 -- # return 0 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:46.462 20:59:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:46.720 malloc1 00:16:46.720 20:59:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:46.979 [2024-06-09 20:59:14.976684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:46.979 [2024-06-09 20:59:14.976787] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.979 [2024-06-09 20:59:14.976819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:46.979 [2024-06-09 20:59:14.976866] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.979 [2024-06-09 20:59:14.979375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.979 [2024-06-09 20:59:14.979447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:46.979 pt1 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:46.979 20:59:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:47.237 malloc2 00:16:47.237 20:59:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.497 [2024-06-09 20:59:15.471857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.497 [2024-06-09 20:59:15.471952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.497 [2024-06-09 20:59:15.471994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:47.497 [2024-06-09 20:59:15.472046] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.497 [2024-06-09 20:59:15.474267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.497 [2024-06-09 20:59:15.474333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.497 pt2 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.497 20:59:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:47.755 malloc3 00:16:47.755 20:59:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:48.013 [2024-06-09 20:59:16.012021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:48.013 [2024-06-09 20:59:16.012119] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.013 [2024-06-09 20:59:16.012165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:48.013 [2024-06-09 20:59:16.012210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.013 [2024-06-09 20:59:16.014574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.013 [2024-06-09 20:59:16.014647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:48.013 pt3 00:16:48.014 20:59:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:48.014 20:59:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:48.014 20:59:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:48.273 [2024-06-09 20:59:16.276127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.273 [2024-06-09 20:59:16.278137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.273 [2024-06-09 20:59:16.278225] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:48.273 [2024-06-09 20:59:16.278483] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:48.273 [2024-06-09 20:59:16.278508] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.273 [2024-06-09 20:59:16.278637] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:48.273 [2024-06-09 20:59:16.279078] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:48.273 [2024-06-09 20:59:16.279133] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:48.273 [2024-06-09 20:59:16.279305] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.273 20:59:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.531 20:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.531 "name": "raid_bdev1", 00:16:48.531 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:48.531 "strip_size_kb": 0, 00:16:48.531 "state": "online", 00:16:48.531 "raid_level": "raid1", 00:16:48.531 "superblock": true, 00:16:48.531 "num_base_bdevs": 3, 00:16:48.531 "num_base_bdevs_discovered": 3, 00:16:48.531 "num_base_bdevs_operational": 3, 00:16:48.531 "base_bdevs_list": [ 00:16:48.531 { 00:16:48.531 "name": 
"pt1", 00:16:48.531 "uuid": "e8e4c8d8-b469-536e-ad41-f85c067a436f", 00:16:48.531 "is_configured": true, 00:16:48.531 "data_offset": 2048, 00:16:48.531 "data_size": 63488 00:16:48.531 }, 00:16:48.531 { 00:16:48.531 "name": "pt2", 00:16:48.531 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:48.531 "is_configured": true, 00:16:48.531 "data_offset": 2048, 00:16:48.531 "data_size": 63488 00:16:48.531 }, 00:16:48.531 { 00:16:48.531 "name": "pt3", 00:16:48.531 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:48.531 "is_configured": true, 00:16:48.531 "data_offset": 2048, 00:16:48.531 "data_size": 63488 00:16:48.531 } 00:16:48.531 ] 00:16:48.531 }' 00:16:48.531 20:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.531 20:59:16 -- common/autotest_common.sh@10 -- # set +x 00:16:49.125 20:59:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:49.125 20:59:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:49.383 [2024-06-09 20:59:17.372498] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.383 20:59:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b0cc5d64-3a51-44d6-9167-95a2cea6bdcb 00:16:49.383 20:59:17 -- bdev/bdev_raid.sh@380 -- # '[' -z b0cc5d64-3a51-44d6-9167-95a2cea6bdcb ']' 00:16:49.383 20:59:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:49.641 [2024-06-09 20:59:17.572351] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.641 [2024-06-09 20:59:17.572377] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.641 [2024-06-09 20:59:17.572459] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.641 [2024-06-09 20:59:17.572545] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.641 [2024-06-09 20:59:17.572557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:49.641 20:59:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.641 20:59:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:49.899 20:59:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:49.899 20:59:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:49.899 20:59:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.899 20:59:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:49.899 20:59:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.899 20:59:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:50.158 20:59:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.158 20:59:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:50.417 20:59:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:50.417 20:59:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:50.675 20:59:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:50.676 20:59:18 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:50.676 20:59:18 -- common/autotest_common.sh@640 -- # local es=0 00:16:50.676 20:59:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:50.676 20:59:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.676 20:59:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.676 20:59:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.676 20:59:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.676 20:59:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.676 20:59:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.676 20:59:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.676 20:59:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:50.676 20:59:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:50.934 [2024-06-09 20:59:18.960740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:50.934 [2024-06-09 20:59:18.962867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:50.934 [2024-06-09 20:59:18.962963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:50.934 [2024-06-09 20:59:18.963024] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:50.934 [2024-06-09 20:59:18.963129] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:50.934 [2024-06-09 20:59:18.963214] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:50.934 [2024-06-09 20:59:18.963264] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.934 [2024-06-09 20:59:18.963277] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:50.934 request: 00:16:50.934 { 00:16:50.934 "name": "raid_bdev1", 00:16:50.934 "raid_level": "raid1", 00:16:50.934 "base_bdevs": [ 00:16:50.934 "malloc1", 00:16:50.934 "malloc2", 00:16:50.934 "malloc3" 00:16:50.934 ], 00:16:50.934 "superblock": false, 00:16:50.934 "method": "bdev_raid_create", 00:16:50.934 "req_id": 1 00:16:50.934 } 00:16:50.934 Got JSON-RPC error response 00:16:50.934 response: 00:16:50.934 { 00:16:50.934 "code": -17, 00:16:50.934 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:50.934 } 00:16:50.934 20:59:18 -- common/autotest_common.sh@643 -- # es=1 00:16:50.934 20:59:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:50.934 20:59:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:50.934 20:59:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:50.934 20:59:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:50.934 20:59:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:51.193 20:59:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:51.193 20:59:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:51.193 20:59:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.452 [2024-06-09 20:59:19.408723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.452 [2024-06-09 20:59:19.408816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.452 [2024-06-09 20:59:19.408854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:51.452 [2024-06-09 20:59:19.408875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.452 [2024-06-09 20:59:19.411164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.452 [2024-06-09 20:59:19.411231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.452 [2024-06-09 20:59:19.411349] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:51.452 [2024-06-09 20:59:19.411436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.452 pt1 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.452 20:59:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.711 20:59:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.711 "name": "raid_bdev1", 00:16:51.711 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:51.711 "strip_size_kb": 0, 00:16:51.711 "state": "configuring", 00:16:51.711 "raid_level": "raid1", 00:16:51.711 "superblock": true, 00:16:51.711 "num_base_bdevs": 3, 00:16:51.711 "num_base_bdevs_discovered": 1, 00:16:51.711 "num_base_bdevs_operational": 3, 00:16:51.711 "base_bdevs_list": [ 00:16:51.711 { 00:16:51.711 "name": "pt1", 00:16:51.711 "uuid": "e8e4c8d8-b469-536e-ad41-f85c067a436f", 00:16:51.711 "is_configured": true, 00:16:51.711 "data_offset": 2048, 00:16:51.711 "data_size": 63488 00:16:51.711 }, 00:16:51.711 { 00:16:51.711 "name": null, 00:16:51.711 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:51.711 "is_configured": false, 00:16:51.711 "data_offset": 2048, 00:16:51.711 "data_size": 63488 00:16:51.711 }, 00:16:51.711 { 00:16:51.711 "name": null, 00:16:51.711 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:51.711 "is_configured": false, 00:16:51.711 "data_offset": 2048, 00:16:51.711 
"data_size": 63488 00:16:51.711 } 00:16:51.711 ] 00:16:51.711 }' 00:16:51.711 20:59:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.711 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:16:52.278 20:59:20 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:52.278 20:59:20 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:52.278 [2024-06-09 20:59:20.372930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:52.278 [2024-06-09 20:59:20.373029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.278 [2024-06-09 20:59:20.373076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:52.278 [2024-06-09 20:59:20.373097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.278 [2024-06-09 20:59:20.373627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.278 [2024-06-09 20:59:20.373670] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:52.278 [2024-06-09 20:59:20.373826] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:52.278 [2024-06-09 20:59:20.373852] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.278 pt2 00:16:52.278 20:59:20 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:52.537 [2024-06-09 20:59:20.589032] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.537 20:59:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.795 20:59:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.795 "name": "raid_bdev1", 00:16:52.795 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:52.795 "strip_size_kb": 0, 00:16:52.795 "state": "configuring", 00:16:52.795 "raid_level": "raid1", 00:16:52.795 "superblock": true, 00:16:52.795 "num_base_bdevs": 3, 00:16:52.795 "num_base_bdevs_discovered": 1, 00:16:52.795 "num_base_bdevs_operational": 3, 00:16:52.795 "base_bdevs_list": [ 00:16:52.795 { 00:16:52.795 "name": "pt1", 00:16:52.795 "uuid": "e8e4c8d8-b469-536e-ad41-f85c067a436f", 00:16:52.795 "is_configured": true, 00:16:52.795 "data_offset": 2048, 00:16:52.795 "data_size": 63488 00:16:52.795 }, 00:16:52.795 { 00:16:52.795 "name": null, 00:16:52.795 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 
00:16:52.795 "is_configured": false, 00:16:52.795 "data_offset": 2048, 00:16:52.795 "data_size": 63488 00:16:52.795 }, 00:16:52.795 { 00:16:52.795 "name": null, 00:16:52.795 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:52.795 "is_configured": false, 00:16:52.795 "data_offset": 2048, 00:16:52.795 "data_size": 63488 00:16:52.795 } 00:16:52.795 ] 00:16:52.795 }' 00:16:52.795 20:59:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.795 20:59:20 -- common/autotest_common.sh@10 -- # set +x 00:16:53.361 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:53.361 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:53.361 20:59:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:53.619 [2024-06-09 20:59:21.553158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:53.619 [2024-06-09 20:59:21.553470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.619 [2024-06-09 20:59:21.553669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:53.619 [2024-06-09 20:59:21.553808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.620 [2024-06-09 20:59:21.554463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.620 [2024-06-09 20:59:21.554637] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:53.620 [2024-06-09 20:59:21.554878] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:53.620 [2024-06-09 20:59:21.555015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:53.620 pt2 00:16:53.620 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:53.620 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:53.620 20:59:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:53.878 [2024-06-09 20:59:21.805339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:53.878 [2024-06-09 20:59:21.805689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.878 [2024-06-09 20:59:21.805899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:53.878 [2024-06-09 20:59:21.806096] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.878 [2024-06-09 20:59:21.807003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.878 [2024-06-09 20:59:21.807234] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:53.878 [2024-06-09 20:59:21.807549] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:53.878 [2024-06-09 20:59:21.807748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:53.878 [2024-06-09 20:59:21.808095] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:53.878 [2024-06-09 20:59:21.808264] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:53.878 [2024-06-09 20:59:21.808462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:53.878 
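[Annotation] At this point pt2 and pt3 have been re-created on top of their malloc bdevs. Because the raid was originally created with -s, each base bdev carries an on-disk superblock, so the examine path ("raid superblock found on bdev pt3" above) reassembles raid_bdev1 without any explicit bdev_raid_create call, and the raid goes online once the last base bdev is claimed. A sketch for watching the discovered/operational counts during such a reassembly, assuming the same RPC socket as the trace (the output format string is illustrative):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") |
               "\(.state): \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational) base bdevs"'
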
[2024-06-09 20:59:21.809073] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:53.878 [2024-06-09 20:59:21.809233] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:53.878 [2024-06-09 20:59:21.809643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.878 pt3 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.878 20:59:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.878 20:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.878 "name": "raid_bdev1", 00:16:53.878 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:53.878 "strip_size_kb": 0, 00:16:53.878 "state": "online", 00:16:53.878 "raid_level": "raid1", 00:16:53.878 "superblock": true, 00:16:53.878 "num_base_bdevs": 3, 00:16:53.878 "num_base_bdevs_discovered": 3, 00:16:53.878 "num_base_bdevs_operational": 3, 00:16:53.878 "base_bdevs_list": [ 00:16:53.878 { 00:16:53.878 "name": "pt1", 00:16:53.878 "uuid": "e8e4c8d8-b469-536e-ad41-f85c067a436f", 00:16:53.878 "is_configured": true, 00:16:53.878 "data_offset": 2048, 00:16:53.878 "data_size": 63488 00:16:53.878 }, 00:16:53.878 { 00:16:53.878 "name": "pt2", 00:16:53.878 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:53.878 "is_configured": true, 00:16:53.878 "data_offset": 2048, 00:16:53.878 "data_size": 63488 00:16:53.879 }, 00:16:53.879 { 00:16:53.879 "name": "pt3", 00:16:53.879 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:53.879 "is_configured": true, 00:16:53.879 "data_offset": 2048, 00:16:53.879 "data_size": 63488 00:16:53.879 } 00:16:53.879 ] 00:16:53.879 }' 00:16:53.879 20:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.879 20:59:22 -- common/autotest_common.sh@10 -- # set +x 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:54.829 [2024-06-09 20:59:22.829910] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@430 -- # '[' b0cc5d64-3a51-44d6-9167-95a2cea6bdcb '!=' b0cc5d64-3a51-44d6-9167-95a2cea6bdcb ']' 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@196 -- # return 0 
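[Annotation] raid_bdev1 is back online with all three base bdevs, and has_redundancy returned 0 for raid1, so the test can now drop a base bdev without taking the array offline. The step traced next deletes pt1 and expects the raid to stay online with 2 of 3 base bdevs discovered. The equivalent manual sequence, mirroring the rpc.py calls in the trace (the RPC variable is a local convenience, not part of the test):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_passthru_delete pt1
    $RPC bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1").state'   # expected: online
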
00:16:54.829 20:59:22 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:55.087 [2024-06-09 20:59:23.021814] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.088 "name": "raid_bdev1", 00:16:55.088 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:55.088 "strip_size_kb": 0, 00:16:55.088 "state": "online", 00:16:55.088 "raid_level": "raid1", 00:16:55.088 "superblock": true, 00:16:55.088 "num_base_bdevs": 3, 00:16:55.088 "num_base_bdevs_discovered": 2, 00:16:55.088 "num_base_bdevs_operational": 2, 00:16:55.088 "base_bdevs_list": [ 00:16:55.088 { 00:16:55.088 "name": null, 00:16:55.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.088 "is_configured": false, 00:16:55.088 "data_offset": 2048, 00:16:55.088 "data_size": 63488 00:16:55.088 }, 00:16:55.088 { 00:16:55.088 "name": "pt2", 00:16:55.088 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:55.088 "is_configured": true, 00:16:55.088 "data_offset": 2048, 00:16:55.088 "data_size": 63488 00:16:55.088 }, 00:16:55.088 { 00:16:55.088 "name": "pt3", 00:16:55.088 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:55.088 "is_configured": true, 00:16:55.088 "data_offset": 2048, 00:16:55.088 "data_size": 63488 00:16:55.088 } 00:16:55.088 ] 00:16:55.088 }' 00:16:55.088 20:59:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.088 20:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:56.022 20:59:23 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:56.022 [2024-06-09 20:59:24.106022] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.022 [2024-06-09 20:59:24.106294] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.022 [2024-06-09 20:59:24.106475] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.022 [2024-06-09 20:59:24.106655] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.022 [2024-06-09 20:59:24.106756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:56.022 20:59:24 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
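[Annotation] After bdev_raid_delete, raid_bdev1 transitions from online to offline and is cleaned up, but the superblocks on the surviving base bdevs are left in place, which is what the pt2/pt3 re-creation steps that follow rely on ("raid superblock found on bdev pt2" further below). A sketch of the emptiness check being performed here, assuming the jq filter visible elsewhere in the trace (select(.) drops the null that .[0]["name"] yields for an empty array):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev=$($RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]] && echo "no raid bdevs registered"
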
00:16:56.022 20:59:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:56.281 20:59:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:56.281 20:59:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:56.281 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:56.281 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:56.281 20:59:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:56.539 20:59:24 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.798 [2024-06-09 20:59:24.926150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.798 [2024-06-09 20:59:24.926378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.798 [2024-06-09 20:59:24.926458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:56.798 [2024-06-09 20:59:24.926676] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.798 [2024-06-09 20:59:24.929199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.798 [2024-06-09 20:59:24.929357] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.798 [2024-06-09 20:59:24.929611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:56.798 [2024-06-09 20:59:24.929763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.798 pt2 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.798 20:59:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.057 20:59:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.057 "name": "raid_bdev1", 00:16:57.057 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:57.057 "strip_size_kb": 0, 00:16:57.057 "state": "configuring", 00:16:57.057 "raid_level": 
"raid1", 00:16:57.057 "superblock": true, 00:16:57.057 "num_base_bdevs": 3, 00:16:57.057 "num_base_bdevs_discovered": 1, 00:16:57.057 "num_base_bdevs_operational": 2, 00:16:57.057 "base_bdevs_list": [ 00:16:57.057 { 00:16:57.057 "name": null, 00:16:57.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.057 "is_configured": false, 00:16:57.057 "data_offset": 2048, 00:16:57.057 "data_size": 63488 00:16:57.057 }, 00:16:57.057 { 00:16:57.057 "name": "pt2", 00:16:57.057 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:57.057 "is_configured": true, 00:16:57.057 "data_offset": 2048, 00:16:57.057 "data_size": 63488 00:16:57.057 }, 00:16:57.057 { 00:16:57.057 "name": null, 00:16:57.057 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:57.057 "is_configured": false, 00:16:57.057 "data_offset": 2048, 00:16:57.057 "data_size": 63488 00:16:57.057 } 00:16:57.057 ] 00:16:57.057 }' 00:16:57.057 20:59:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.057 20:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:57.623 20:59:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:16:57.623 20:59:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:57.623 20:59:25 -- bdev/bdev_raid.sh@462 -- # i=2 00:16:57.623 20:59:25 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:57.882 [2024-06-09 20:59:25.930329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:57.882 [2024-06-09 20:59:25.930565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.882 [2024-06-09 20:59:25.930747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:57.882 [2024-06-09 20:59:25.930912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.882 [2024-06-09 20:59:25.931506] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.882 [2024-06-09 20:59:25.931656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:57.882 [2024-06-09 20:59:25.931885] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:57.882 [2024-06-09 20:59:25.932013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.882 [2024-06-09 20:59:25.932254] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:16:57.882 [2024-06-09 20:59:25.932366] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.882 [2024-06-09 20:59:25.932501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:57.882 [2024-06-09 20:59:25.932987] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:16:57.882 [2024-06-09 20:59:25.933116] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:16:57.882 [2024-06-09 20:59:25.933341] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.882 pt3 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:57.882 
20:59:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.882 20:59:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.141 20:59:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.141 "name": "raid_bdev1", 00:16:58.141 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:58.141 "strip_size_kb": 0, 00:16:58.141 "state": "online", 00:16:58.141 "raid_level": "raid1", 00:16:58.141 "superblock": true, 00:16:58.141 "num_base_bdevs": 3, 00:16:58.141 "num_base_bdevs_discovered": 2, 00:16:58.141 "num_base_bdevs_operational": 2, 00:16:58.141 "base_bdevs_list": [ 00:16:58.141 { 00:16:58.141 "name": null, 00:16:58.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.141 "is_configured": false, 00:16:58.141 "data_offset": 2048, 00:16:58.141 "data_size": 63488 00:16:58.141 }, 00:16:58.141 { 00:16:58.141 "name": "pt2", 00:16:58.141 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:58.141 "is_configured": true, 00:16:58.141 "data_offset": 2048, 00:16:58.141 "data_size": 63488 00:16:58.141 }, 00:16:58.141 { 00:16:58.141 "name": "pt3", 00:16:58.141 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:58.141 "is_configured": true, 00:16:58.141 "data_offset": 2048, 00:16:58.141 "data_size": 63488 00:16:58.141 } 00:16:58.141 ] 00:16:58.141 }' 00:16:58.141 20:59:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.141 20:59:26 -- common/autotest_common.sh@10 -- # set +x 00:16:58.769 20:59:26 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:16:58.769 20:59:26 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.028 [2024-06-09 20:59:26.978490] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.028 [2024-06-09 20:59:26.978640] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.028 [2024-06-09 20:59:26.978835] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.028 [2024-06-09 20:59:26.979037] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.028 [2024-06-09 20:59:26.979197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:16:59.028 20:59:26 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:16:59.028 20:59:26 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.287 20:59:27 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:16:59.287 20:59:27 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:16:59.287 20:59:27 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.545 [2024-06-09 20:59:27.474585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.546 [2024-06-09 
20:59:27.474935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.546 [2024-06-09 20:59:27.475171] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:59.546 [2024-06-09 20:59:27.475327] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.546 [2024-06-09 20:59:27.477914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.546 [2024-06-09 20:59:27.478088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.546 [2024-06-09 20:59:27.478365] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:59.546 [2024-06-09 20:59:27.478524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.546 pt1 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.546 20:59:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.804 20:59:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.804 "name": "raid_bdev1", 00:16:59.804 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:16:59.804 "strip_size_kb": 0, 00:16:59.804 "state": "configuring", 00:16:59.804 "raid_level": "raid1", 00:16:59.804 "superblock": true, 00:16:59.804 "num_base_bdevs": 3, 00:16:59.804 "num_base_bdevs_discovered": 1, 00:16:59.804 "num_base_bdevs_operational": 3, 00:16:59.805 "base_bdevs_list": [ 00:16:59.805 { 00:16:59.805 "name": "pt1", 00:16:59.805 "uuid": "e8e4c8d8-b469-536e-ad41-f85c067a436f", 00:16:59.805 "is_configured": true, 00:16:59.805 "data_offset": 2048, 00:16:59.805 "data_size": 63488 00:16:59.805 }, 00:16:59.805 { 00:16:59.805 "name": null, 00:16:59.805 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:16:59.805 "is_configured": false, 00:16:59.805 "data_offset": 2048, 00:16:59.805 "data_size": 63488 00:16:59.805 }, 00:16:59.805 { 00:16:59.805 "name": null, 00:16:59.805 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:16:59.805 "is_configured": false, 00:16:59.805 "data_offset": 2048, 00:16:59.805 "data_size": 63488 00:16:59.805 } 00:16:59.805 ] 00:16:59.805 }' 00:16:59.805 20:59:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.805 20:59:27 -- common/autotest_common.sh@10 -- # set +x 00:17:00.372 20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:00.372 20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:00.372 20:59:28 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:00.630 20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:00.630 
20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:00.630 20:59:28 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:00.889 20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:00.889 20:59:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:00.889 20:59:28 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:00.889 20:59:28 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:00.889 [2024-06-09 20:59:29.059170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:00.889 [2024-06-09 20:59:29.059384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.889 [2024-06-09 20:59:29.059452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:00.889 [2024-06-09 20:59:29.059633] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.889 [2024-06-09 20:59:29.060242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.889 [2024-06-09 20:59:29.060425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:00.889 [2024-06-09 20:59:29.060622] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:00.889 [2024-06-09 20:59:29.060736] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:00.889 [2024-06-09 20:59:29.060830] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.889 [2024-06-09 20:59:29.060889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:17:00.889 [2024-06-09 20:59:29.061116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:00.889 pt3 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.148 "name": "raid_bdev1", 00:17:01.148 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:17:01.148 "strip_size_kb": 0, 00:17:01.148 "state": "configuring", 00:17:01.148 "raid_level": "raid1", 00:17:01.148 "superblock": true, 00:17:01.148 "num_base_bdevs": 3, 00:17:01.148 "num_base_bdevs_discovered": 1, 00:17:01.148 "num_base_bdevs_operational": 2, 00:17:01.148 
"base_bdevs_list": [ 00:17:01.148 { 00:17:01.148 "name": null, 00:17:01.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.148 "is_configured": false, 00:17:01.148 "data_offset": 2048, 00:17:01.148 "data_size": 63488 00:17:01.148 }, 00:17:01.148 { 00:17:01.148 "name": null, 00:17:01.148 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:17:01.148 "is_configured": false, 00:17:01.148 "data_offset": 2048, 00:17:01.148 "data_size": 63488 00:17:01.148 }, 00:17:01.148 { 00:17:01.148 "name": "pt3", 00:17:01.148 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:17:01.148 "is_configured": true, 00:17:01.148 "data_offset": 2048, 00:17:01.148 "data_size": 63488 00:17:01.148 } 00:17:01.148 ] 00:17:01.148 }' 00:17:01.148 20:59:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.148 20:59:29 -- common/autotest_common.sh@10 -- # set +x 00:17:02.084 20:59:29 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:02.084 20:59:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:02.084 20:59:29 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.084 [2024-06-09 20:59:30.143487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.084 [2024-06-09 20:59:30.143739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.084 [2024-06-09 20:59:30.143899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:02.084 [2024-06-09 20:59:30.144032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.084 [2024-06-09 20:59:30.144639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.084 [2024-06-09 20:59:30.144804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.084 [2024-06-09 20:59:30.144989] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:02.084 [2024-06-09 20:59:30.145114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.084 [2024-06-09 20:59:30.145349] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:17:02.084 [2024-06-09 20:59:30.145463] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.084 [2024-06-09 20:59:30.145653] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:02.084 [2024-06-09 20:59:30.146142] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:17:02.085 [2024-06-09 20:59:30.146267] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:17:02.085 [2024-06-09 20:59:30.146493] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.085 pt2 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:02.085 20:59:30 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.085 20:59:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.343 20:59:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.343 "name": "raid_bdev1", 00:17:02.343 "uuid": "b0cc5d64-3a51-44d6-9167-95a2cea6bdcb", 00:17:02.343 "strip_size_kb": 0, 00:17:02.343 "state": "online", 00:17:02.343 "raid_level": "raid1", 00:17:02.343 "superblock": true, 00:17:02.343 "num_base_bdevs": 3, 00:17:02.343 "num_base_bdevs_discovered": 2, 00:17:02.343 "num_base_bdevs_operational": 2, 00:17:02.343 "base_bdevs_list": [ 00:17:02.343 { 00:17:02.343 "name": null, 00:17:02.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.343 "is_configured": false, 00:17:02.343 "data_offset": 2048, 00:17:02.343 "data_size": 63488 00:17:02.343 }, 00:17:02.343 { 00:17:02.343 "name": "pt2", 00:17:02.343 "uuid": "e3006e69-4d13-5ae9-8380-aec1e92dc672", 00:17:02.343 "is_configured": true, 00:17:02.343 "data_offset": 2048, 00:17:02.343 "data_size": 63488 00:17:02.343 }, 00:17:02.343 { 00:17:02.343 "name": "pt3", 00:17:02.343 "uuid": "35793901-b307-52cb-82af-6562b1314d66", 00:17:02.343 "is_configured": true, 00:17:02.343 "data_offset": 2048, 00:17:02.343 "data_size": 63488 00:17:02.343 } 00:17:02.343 ] 00:17:02.343 }' 00:17:02.343 20:59:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.343 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:17:02.910 20:59:30 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:02.910 20:59:30 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:03.169 [2024-06-09 20:59:31.175843] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.169 20:59:31 -- bdev/bdev_raid.sh@506 -- # '[' b0cc5d64-3a51-44d6-9167-95a2cea6bdcb '!=' b0cc5d64-3a51-44d6-9167-95a2cea6bdcb ']' 00:17:03.169 20:59:31 -- bdev/bdev_raid.sh@511 -- # killprocess 117192 00:17:03.169 20:59:31 -- common/autotest_common.sh@926 -- # '[' -z 117192 ']' 00:17:03.169 20:59:31 -- common/autotest_common.sh@930 -- # kill -0 117192 00:17:03.169 20:59:31 -- common/autotest_common.sh@931 -- # uname 00:17:03.169 20:59:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.169 20:59:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117192 00:17:03.169 killing process with pid 117192 00:17:03.169 20:59:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.169 20:59:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.169 20:59:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117192' 00:17:03.169 20:59:31 -- common/autotest_common.sh@945 -- # kill 117192 00:17:03.169 [2024-06-09 20:59:31.217886] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.169 20:59:31 -- common/autotest_common.sh@950 -- # wait 117192 00:17:03.169 [2024-06-09 20:59:31.217957] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.169 [2024-06-09 20:59:31.218012] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.169 [2024-06-09 20:59:31.218021] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:17:03.427 [2024-06-09 20:59:31.419945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.362 ************************************ 00:17:04.362 END TEST raid_superblock_test 00:17:04.362 ************************************ 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:04.362 00:17:04.362 real 0m18.856s 00:17:04.362 user 0m34.619s 00:17:04.362 sys 0m2.175s 00:17:04.362 20:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.362 20:59:32 -- common/autotest_common.sh@10 -- # set +x
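Every verify_raid_bdev_state call traced in the test that just ended reduces to the same idiom: query all raid bdevs over the test socket, select the bdev under test by name with jq, and compare individual fields against the expected values. A minimal sketch of that idiom in bash, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock; check_raid_state and its variables are illustrative names, not the suite's verbatim helper:

#!/usr/bin/env bash
# Illustrative reconstruction of the state check seen in the trace; not the suite's exact code.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
    local name=$1 expected_state=$2
    # Same query the trace issues: bdev_raid_get_bdevs all, filtered by name with jq.
    local info
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ -n $info ]] || { echo "$name not found" >&2; return 1; }
    [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]]
}

check_raid_state raid_bdev1 online || echo 'unexpected raid state' >&2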
00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:04.362 20:59:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:04.362 20:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:04.362 20:59:32 -- common/autotest_common.sh@10 -- # set +x 00:17:04.362 ************************************ 00:17:04.362 START TEST raid_state_function_test 00:17:04.362 ************************************ 00:17:04.362 20:59:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 20:59:32 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=117794 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117794' 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:04.362 Process raid pid: 117794 00:17:04.362 20:59:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117794 /var/tmp/spdk-raid.sock 00:17:04.362 20:59:32 -- common/autotest_common.sh@819 -- # '[' -z 117794 ']' 00:17:04.362 20:59:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:04.362 20:59:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.362 20:59:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:04.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:04.362 20:59:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.362 20:59:32 -- common/autotest_common.sh@10 -- # set +x 00:17:04.619 [2024-06-09 20:59:32.571594] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:04.619 [2024-06-09 20:59:32.571986] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.619 [2024-06-09 20:59:32.736066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.878 [2024-06-09 20:59:32.928573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.136 [2024-06-09 20:59:33.119187] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.394 20:59:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:05.394 20:59:33 -- common/autotest_common.sh@852 -- # return 0
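The trace that follows shows rpc.py asking for a raid0 array before any of its base bdevs exist, which is why Existed_Raid is created in the "configuring" state and each BaseBdevN is reported as missing. For contrast, here is a hedged sketch of the same RPC sequence with the malloc base bdevs created up front, so the array can assemble and go online; sizes, flags, and names mirror the trace, but this ordering is the editor's illustration rather than the traced test:

# Create the four base bdevs first (32 MiB each, 512-byte blocks, as used later
# in the trace), then assemble the raid0 array over them.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
done
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# List raid bdevs that reached the online state.
"$rpc" -s "$sock" bdev_raid_get_bdevs online | jq -r '.[].name'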
00:17:05.394 20:59:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:05.652 [2024-06-09 20:59:33.739719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.652 [2024-06-09 20:59:33.739984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.652 [2024-06-09 20:59:33.740111] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.652 [2024-06-09 20:59:33.740250] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.652 [2024-06-09 20:59:33.740347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.652 [2024-06-09 20:59:33.740424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.652 [2024-06-09 20:59:33.740662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.652 [2024-06-09 20:59:33.740724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.652 20:59:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.910 20:59:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.910 "name": "Existed_Raid", 00:17:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.910 "strip_size_kb": 64, 00:17:05.910 "state": "configuring", 00:17:05.910 "raid_level": "raid0", 00:17:05.910 "superblock": false, 00:17:05.910 "num_base_bdevs": 4, 00:17:05.910 "num_base_bdevs_discovered": 0, 00:17:05.910 "num_base_bdevs_operational": 4, 00:17:05.910 "base_bdevs_list": [ 00:17:05.910 { 00:17:05.910 "name": "BaseBdev1", 00:17:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.910 "is_configured": false, 00:17:05.910 "data_offset": 0, 00:17:05.910 "data_size": 0 00:17:05.910 }, 00:17:05.910 { 00:17:05.910 "name": "BaseBdev2", 00:17:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.910 "is_configured": false, 00:17:05.910 "data_offset": 0, 00:17:05.910 "data_size": 0 00:17:05.910 }, 00:17:05.910 { 00:17:05.910 "name": "BaseBdev3", 00:17:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.910 "is_configured": false, 00:17:05.910 "data_offset": 0, 00:17:05.910 "data_size": 0 00:17:05.910 }, 00:17:05.910 { 00:17:05.910 "name": "BaseBdev4", 00:17:05.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.910 "is_configured": false, 00:17:05.910 "data_offset": 0, 00:17:05.910 "data_size": 0 00:17:05.910 } 00:17:05.910 ] 00:17:05.910 }' 00:17:05.910 20:59:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.910 20:59:33 -- common/autotest_common.sh@10 -- # set +x 00:17:06.478 20:59:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.736 [2024-06-09 20:59:34.871771] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.736 [2024-06-09 20:59:34.871961] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:06.736 20:59:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:06.994 [2024-06-09 20:59:35.123840] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.994 [2024-06-09 20:59:35.124041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.994 [2024-06-09 20:59:35.124143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.994 [2024-06-09 20:59:35.124209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2
doesn't exist now 00:17:06.994 [2024-06-09 20:59:35.124309] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.994 [2024-06-09 20:59:35.124385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.994 [2024-06-09 20:59:35.124416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.994 [2024-06-09 20:59:35.124528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.994 20:59:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:07.253 [2024-06-09 20:59:35.405694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.253 BaseBdev1 00:17:07.253 20:59:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:07.253 20:59:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:07.253 20:59:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:07.253 20:59:35 -- common/autotest_common.sh@889 -- # local i 00:17:07.253 20:59:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:07.253 20:59:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:07.253 20:59:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.511 20:59:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.769 [ 00:17:07.769 { 00:17:07.769 "name": "BaseBdev1", 00:17:07.769 "aliases": [ 00:17:07.769 "467385d8-fe7f-47b3-9140-07dc8b0351a3" 00:17:07.769 ], 00:17:07.769 "product_name": "Malloc disk", 00:17:07.769 "block_size": 512, 00:17:07.769 "num_blocks": 65536, 00:17:07.769 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:07.769 "assigned_rate_limits": { 00:17:07.769 "rw_ios_per_sec": 0, 00:17:07.769 "rw_mbytes_per_sec": 0, 00:17:07.769 "r_mbytes_per_sec": 0, 00:17:07.769 "w_mbytes_per_sec": 0 00:17:07.769 }, 00:17:07.769 "claimed": true, 00:17:07.769 "claim_type": "exclusive_write", 00:17:07.769 "zoned": false, 00:17:07.769 "supported_io_types": { 00:17:07.769 "read": true, 00:17:07.769 "write": true, 00:17:07.769 "unmap": true, 00:17:07.769 "write_zeroes": true, 00:17:07.769 "flush": true, 00:17:07.769 "reset": true, 00:17:07.769 "compare": false, 00:17:07.769 "compare_and_write": false, 00:17:07.769 "abort": true, 00:17:07.769 "nvme_admin": false, 00:17:07.769 "nvme_io": false 00:17:07.769 }, 00:17:07.769 "memory_domains": [ 00:17:07.769 { 00:17:07.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.769 "dma_device_type": 2 00:17:07.769 } 00:17:07.769 ], 00:17:07.769 "driver_specific": {} 00:17:07.769 } 00:17:07.769 ] 00:17:07.769 20:59:35 -- common/autotest_common.sh@895 -- # return 0 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.769 20:59:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.043 20:59:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.043 "name": "Existed_Raid", 00:17:08.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.043 "strip_size_kb": 64, 00:17:08.043 "state": "configuring", 00:17:08.043 "raid_level": "raid0", 00:17:08.043 "superblock": false, 00:17:08.043 "num_base_bdevs": 4, 00:17:08.043 "num_base_bdevs_discovered": 1, 00:17:08.043 "num_base_bdevs_operational": 4, 00:17:08.043 "base_bdevs_list": [ 00:17:08.043 { 00:17:08.043 "name": "BaseBdev1", 00:17:08.043 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:08.043 "is_configured": true, 00:17:08.043 "data_offset": 0, 00:17:08.043 "data_size": 65536 00:17:08.043 }, 00:17:08.043 { 00:17:08.043 "name": "BaseBdev2", 00:17:08.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.043 "is_configured": false, 00:17:08.043 "data_offset": 0, 00:17:08.043 "data_size": 0 00:17:08.043 }, 00:17:08.043 { 00:17:08.043 "name": "BaseBdev3", 00:17:08.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.043 "is_configured": false, 00:17:08.043 "data_offset": 0, 00:17:08.043 "data_size": 0 00:17:08.043 }, 00:17:08.043 { 00:17:08.043 "name": "BaseBdev4", 00:17:08.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.043 "is_configured": false, 00:17:08.043 "data_offset": 0, 00:17:08.043 "data_size": 0 00:17:08.043 } 00:17:08.043 ] 00:17:08.043 }' 00:17:08.043 20:59:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.043 20:59:36 -- common/autotest_common.sh@10 -- # set +x 00:17:08.629 20:59:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.887 [2024-06-09 20:59:36.918087] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.887 [2024-06-09 20:59:36.918295] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:08.887 20:59:36 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:08.887 20:59:36 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.145 [2024-06-09 20:59:37.126174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.145 [2024-06-09 20:59:37.128278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.145 [2024-06-09 20:59:37.128489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.145 [2024-06-09 20:59:37.128597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.145 [2024-06-09 20:59:37.128661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.145 [2024-06-09 20:59:37.128751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.145 [2024-06-09 20:59:37.128898] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.145 20:59:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.404 20:59:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.404 "name": "Existed_Raid", 00:17:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.404 "strip_size_kb": 64, 00:17:09.404 "state": "configuring", 00:17:09.404 "raid_level": "raid0", 00:17:09.404 "superblock": false, 00:17:09.404 "num_base_bdevs": 4, 00:17:09.404 "num_base_bdevs_discovered": 1, 00:17:09.404 "num_base_bdevs_operational": 4, 00:17:09.404 "base_bdevs_list": [ 00:17:09.404 { 00:17:09.404 "name": "BaseBdev1", 00:17:09.404 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:09.404 "is_configured": true, 00:17:09.404 "data_offset": 0, 00:17:09.404 "data_size": 65536 00:17:09.404 }, 00:17:09.404 { 00:17:09.404 "name": "BaseBdev2", 00:17:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.404 "is_configured": false, 00:17:09.404 "data_offset": 0, 00:17:09.404 "data_size": 0 00:17:09.404 }, 00:17:09.404 { 00:17:09.404 "name": "BaseBdev3", 00:17:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.404 "is_configured": false, 00:17:09.404 "data_offset": 0, 00:17:09.404 "data_size": 0 00:17:09.404 }, 00:17:09.404 { 00:17:09.404 "name": "BaseBdev4", 00:17:09.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.404 "is_configured": false, 00:17:09.404 "data_offset": 0, 00:17:09.404 "data_size": 0 00:17:09.404 } 00:17:09.404 ] 00:17:09.404 }' 00:17:09.404 20:59:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.404 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:09.971 20:59:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.229 [2024-06-09 20:59:38.208007] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.229 BaseBdev2 00:17:10.229 20:59:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:10.229 20:59:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:10.229 20:59:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:10.229 20:59:38 -- common/autotest_common.sh@889 -- # local i 00:17:10.229 20:59:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:10.229 20:59:38 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:17:10.229 20:59:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.488 20:59:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.746 [ 00:17:10.746 { 00:17:10.746 "name": "BaseBdev2", 00:17:10.746 "aliases": [ 00:17:10.746 "2782898c-ba1e-4874-803e-6f0ffdb3a63f" 00:17:10.746 ], 00:17:10.746 "product_name": "Malloc disk", 00:17:10.746 "block_size": 512, 00:17:10.746 "num_blocks": 65536, 00:17:10.746 "uuid": "2782898c-ba1e-4874-803e-6f0ffdb3a63f", 00:17:10.746 "assigned_rate_limits": { 00:17:10.746 "rw_ios_per_sec": 0, 00:17:10.746 "rw_mbytes_per_sec": 0, 00:17:10.746 "r_mbytes_per_sec": 0, 00:17:10.746 "w_mbytes_per_sec": 0 00:17:10.746 }, 00:17:10.746 "claimed": true, 00:17:10.746 "claim_type": "exclusive_write", 00:17:10.746 "zoned": false, 00:17:10.746 "supported_io_types": { 00:17:10.746 "read": true, 00:17:10.746 "write": true, 00:17:10.746 "unmap": true, 00:17:10.746 "write_zeroes": true, 00:17:10.746 "flush": true, 00:17:10.746 "reset": true, 00:17:10.746 "compare": false, 00:17:10.746 "compare_and_write": false, 00:17:10.746 "abort": true, 00:17:10.746 "nvme_admin": false, 00:17:10.746 "nvme_io": false 00:17:10.746 }, 00:17:10.746 "memory_domains": [ 00:17:10.746 { 00:17:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.746 "dma_device_type": 2 00:17:10.746 } 00:17:10.746 ], 00:17:10.746 "driver_specific": {} 00:17:10.746 } 00:17:10.746 ] 00:17:10.746 20:59:38 -- common/autotest_common.sh@895 -- # return 0 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.747 "name": "Existed_Raid", 00:17:10.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.747 "strip_size_kb": 64, 00:17:10.747 "state": "configuring", 00:17:10.747 "raid_level": "raid0", 00:17:10.747 "superblock": false, 00:17:10.747 "num_base_bdevs": 4, 00:17:10.747 "num_base_bdevs_discovered": 2, 00:17:10.747 "num_base_bdevs_operational": 4, 00:17:10.747 "base_bdevs_list": [ 00:17:10.747 { 00:17:10.747 "name": "BaseBdev1", 00:17:10.747 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:10.747 "is_configured": true, 00:17:10.747 "data_offset": 0, 00:17:10.747 "data_size": 65536 00:17:10.747 }, 
00:17:10.747 { 00:17:10.747 "name": "BaseBdev2", 00:17:10.747 "uuid": "2782898c-ba1e-4874-803e-6f0ffdb3a63f", 00:17:10.747 "is_configured": true, 00:17:10.747 "data_offset": 0, 00:17:10.747 "data_size": 65536 00:17:10.747 }, 00:17:10.747 { 00:17:10.747 "name": "BaseBdev3", 00:17:10.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.747 "is_configured": false, 00:17:10.747 "data_offset": 0, 00:17:10.747 "data_size": 0 00:17:10.747 }, 00:17:10.747 { 00:17:10.747 "name": "BaseBdev4", 00:17:10.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.747 "is_configured": false, 00:17:10.747 "data_offset": 0, 00:17:10.747 "data_size": 0 00:17:10.747 } 00:17:10.747 ] 00:17:10.747 }' 00:17:10.747 20:59:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.747 20:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:11.682 20:59:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.682 [2024-06-09 20:59:39.763627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.682 BaseBdev3 00:17:11.682 20:59:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:11.682 20:59:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:11.682 20:59:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.682 20:59:39 -- common/autotest_common.sh@889 -- # local i 00:17:11.682 20:59:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.682 20:59:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.682 20:59:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.940 20:59:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:12.199 [ 00:17:12.199 { 00:17:12.199 "name": "BaseBdev3", 00:17:12.199 "aliases": [ 00:17:12.199 "e83544c1-073a-4e38-a8f5-1f91b806f2b4" 00:17:12.199 ], 00:17:12.199 "product_name": "Malloc disk", 00:17:12.199 "block_size": 512, 00:17:12.199 "num_blocks": 65536, 00:17:12.199 "uuid": "e83544c1-073a-4e38-a8f5-1f91b806f2b4", 00:17:12.199 "assigned_rate_limits": { 00:17:12.199 "rw_ios_per_sec": 0, 00:17:12.199 "rw_mbytes_per_sec": 0, 00:17:12.199 "r_mbytes_per_sec": 0, 00:17:12.199 "w_mbytes_per_sec": 0 00:17:12.199 }, 00:17:12.199 "claimed": true, 00:17:12.199 "claim_type": "exclusive_write", 00:17:12.199 "zoned": false, 00:17:12.199 "supported_io_types": { 00:17:12.199 "read": true, 00:17:12.199 "write": true, 00:17:12.199 "unmap": true, 00:17:12.199 "write_zeroes": true, 00:17:12.199 "flush": true, 00:17:12.199 "reset": true, 00:17:12.199 "compare": false, 00:17:12.199 "compare_and_write": false, 00:17:12.199 "abort": true, 00:17:12.199 "nvme_admin": false, 00:17:12.199 "nvme_io": false 00:17:12.199 }, 00:17:12.199 "memory_domains": [ 00:17:12.199 { 00:17:12.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.199 "dma_device_type": 2 00:17:12.199 } 00:17:12.199 ], 00:17:12.199 "driver_specific": {} 00:17:12.199 } 00:17:12.199 ] 00:17:12.199 20:59:40 -- common/autotest_common.sh@895 -- # return 0 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.199 20:59:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.457 20:59:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.457 "name": "Existed_Raid", 00:17:12.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.457 "strip_size_kb": 64, 00:17:12.457 "state": "configuring", 00:17:12.457 "raid_level": "raid0", 00:17:12.457 "superblock": false, 00:17:12.457 "num_base_bdevs": 4, 00:17:12.457 "num_base_bdevs_discovered": 3, 00:17:12.457 "num_base_bdevs_operational": 4, 00:17:12.457 "base_bdevs_list": [ 00:17:12.457 { 00:17:12.457 "name": "BaseBdev1", 00:17:12.457 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:12.457 "is_configured": true, 00:17:12.457 "data_offset": 0, 00:17:12.457 "data_size": 65536 00:17:12.457 }, 00:17:12.457 { 00:17:12.457 "name": "BaseBdev2", 00:17:12.457 "uuid": "2782898c-ba1e-4874-803e-6f0ffdb3a63f", 00:17:12.457 "is_configured": true, 00:17:12.458 "data_offset": 0, 00:17:12.458 "data_size": 65536 00:17:12.458 }, 00:17:12.458 { 00:17:12.458 "name": "BaseBdev3", 00:17:12.458 "uuid": "e83544c1-073a-4e38-a8f5-1f91b806f2b4", 00:17:12.458 "is_configured": true, 00:17:12.458 "data_offset": 0, 00:17:12.458 "data_size": 65536 00:17:12.458 }, 00:17:12.458 { 00:17:12.458 "name": "BaseBdev4", 00:17:12.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.458 "is_configured": false, 00:17:12.458 "data_offset": 0, 00:17:12.458 "data_size": 0 00:17:12.458 } 00:17:12.458 ] 00:17:12.458 }' 00:17:12.458 20:59:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.458 20:59:40 -- common/autotest_common.sh@10 -- # set +x 00:17:13.024 20:59:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:13.283 [2024-06-09 20:59:41.245677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.283 [2024-06-09 20:59:41.245857] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:13.283 [2024-06-09 20:59:41.245911] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:13.283 [2024-06-09 20:59:41.246165] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:13.283 [2024-06-09 20:59:41.246640] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:13.283 [2024-06-09 20:59:41.246768] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:13.283 [2024-06-09 20:59:41.247152] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.283 BaseBdev4 00:17:13.283 20:59:41 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:17:13.283 20:59:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:13.283 20:59:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:13.283 20:59:41 -- common/autotest_common.sh@889 -- # local i 00:17:13.283 20:59:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:13.283 20:59:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:13.283 20:59:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.541 20:59:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:13.541 [ 00:17:13.541 { 00:17:13.541 "name": "BaseBdev4", 00:17:13.541 "aliases": [ 00:17:13.541 "de74f605-fe43-41d8-bb78-0d36c6b5dab2" 00:17:13.541 ], 00:17:13.541 "product_name": "Malloc disk", 00:17:13.541 "block_size": 512, 00:17:13.541 "num_blocks": 65536, 00:17:13.541 "uuid": "de74f605-fe43-41d8-bb78-0d36c6b5dab2", 00:17:13.541 "assigned_rate_limits": { 00:17:13.541 "rw_ios_per_sec": 0, 00:17:13.541 "rw_mbytes_per_sec": 0, 00:17:13.541 "r_mbytes_per_sec": 0, 00:17:13.541 "w_mbytes_per_sec": 0 00:17:13.541 }, 00:17:13.541 "claimed": true, 00:17:13.541 "claim_type": "exclusive_write", 00:17:13.541 "zoned": false, 00:17:13.541 "supported_io_types": { 00:17:13.541 "read": true, 00:17:13.541 "write": true, 00:17:13.541 "unmap": true, 00:17:13.541 "write_zeroes": true, 00:17:13.541 "flush": true, 00:17:13.541 "reset": true, 00:17:13.541 "compare": false, 00:17:13.541 "compare_and_write": false, 00:17:13.541 "abort": true, 00:17:13.541 "nvme_admin": false, 00:17:13.541 "nvme_io": false 00:17:13.541 }, 00:17:13.541 "memory_domains": [ 00:17:13.541 { 00:17:13.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.541 "dma_device_type": 2 00:17:13.541 } 00:17:13.541 ], 00:17:13.541 "driver_specific": {} 00:17:13.541 } 00:17:13.541 ] 00:17:13.541 20:59:41 -- common/autotest_common.sh@895 -- # return 0 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.541 20:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.799 20:59:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.799 20:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.799 20:59:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.799 20:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.799 20:59:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.799 "name": "Existed_Raid", 00:17:13.799 "uuid": "be1adc62-60ba-4394-b667-684c54cf35db", 00:17:13.799 "strip_size_kb": 64, 00:17:13.799 "state": "online", 00:17:13.799 "raid_level": "raid0", 00:17:13.799 "superblock": false, 00:17:13.799 
"num_base_bdevs": 4, 00:17:13.799 "num_base_bdevs_discovered": 4, 00:17:13.799 "num_base_bdevs_operational": 4, 00:17:13.799 "base_bdevs_list": [ 00:17:13.799 { 00:17:13.799 "name": "BaseBdev1", 00:17:13.799 "uuid": "467385d8-fe7f-47b3-9140-07dc8b0351a3", 00:17:13.799 "is_configured": true, 00:17:13.799 "data_offset": 0, 00:17:13.799 "data_size": 65536 00:17:13.799 }, 00:17:13.799 { 00:17:13.799 "name": "BaseBdev2", 00:17:13.799 "uuid": "2782898c-ba1e-4874-803e-6f0ffdb3a63f", 00:17:13.799 "is_configured": true, 00:17:13.800 "data_offset": 0, 00:17:13.800 "data_size": 65536 00:17:13.800 }, 00:17:13.800 { 00:17:13.800 "name": "BaseBdev3", 00:17:13.800 "uuid": "e83544c1-073a-4e38-a8f5-1f91b806f2b4", 00:17:13.800 "is_configured": true, 00:17:13.800 "data_offset": 0, 00:17:13.800 "data_size": 65536 00:17:13.800 }, 00:17:13.800 { 00:17:13.800 "name": "BaseBdev4", 00:17:13.800 "uuid": "de74f605-fe43-41d8-bb78-0d36c6b5dab2", 00:17:13.800 "is_configured": true, 00:17:13.800 "data_offset": 0, 00:17:13.800 "data_size": 65536 00:17:13.800 } 00:17:13.800 ] 00:17:13.800 }' 00:17:13.800 20:59:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.800 20:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:14.366 20:59:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:14.624 [2024-06-09 20:59:42.774146] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.624 [2024-06-09 20:59:42.774295] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.624 [2024-06-09 20:59:42.774459] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.882 20:59:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.140 20:59:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.140 "name": "Existed_Raid", 00:17:15.140 "uuid": "be1adc62-60ba-4394-b667-684c54cf35db", 00:17:15.140 "strip_size_kb": 64, 00:17:15.140 "state": "offline", 00:17:15.140 "raid_level": "raid0", 00:17:15.140 "superblock": false, 00:17:15.140 "num_base_bdevs": 4, 00:17:15.140 "num_base_bdevs_discovered": 3, 00:17:15.140 "num_base_bdevs_operational": 3, 00:17:15.140 
"base_bdevs_list": [ 00:17:15.140 { 00:17:15.140 "name": null, 00:17:15.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.140 "is_configured": false, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": "BaseBdev2", 00:17:15.140 "uuid": "2782898c-ba1e-4874-803e-6f0ffdb3a63f", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": "BaseBdev3", 00:17:15.140 "uuid": "e83544c1-073a-4e38-a8f5-1f91b806f2b4", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 }, 00:17:15.140 { 00:17:15.140 "name": "BaseBdev4", 00:17:15.140 "uuid": "de74f605-fe43-41d8-bb78-0d36c6b5dab2", 00:17:15.140 "is_configured": true, 00:17:15.140 "data_offset": 0, 00:17:15.140 "data_size": 65536 00:17:15.140 } 00:17:15.140 ] 00:17:15.140 }' 00:17:15.140 20:59:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.140 20:59:43 -- common/autotest_common.sh@10 -- # set +x 00:17:15.707 20:59:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:15.707 20:59:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.707 20:59:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.707 20:59:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.965 20:59:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:15.965 20:59:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.965 20:59:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:15.965 [2024-06-09 20:59:44.105391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.224 20:59:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.224 20:59:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.224 20:59:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.224 20:59:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:16.483 20:59:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:16.483 20:59:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.483 20:59:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:16.742 [2024-06-09 20:59:44.692863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:16.742 20:59:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:16.742 20:59:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:16.742 20:59:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.742 20:59:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:17.000 20:59:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:17.000 20:59:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.000 20:59:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:17.259 [2024-06-09 20:59:45.210423] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:17.259 [2024-06-09 20:59:45.210614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 
name Existed_Raid, state offline 00:17:17.259 20:59:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:17.259 20:59:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:17.259 20:59:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.259 20:59:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.518 20:59:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:17.518 20:59:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:17.518 20:59:45 -- bdev/bdev_raid.sh@287 -- # killprocess 117794 00:17:17.518 20:59:45 -- common/autotest_common.sh@926 -- # '[' -z 117794 ']' 00:17:17.518 20:59:45 -- common/autotest_common.sh@930 -- # kill -0 117794 00:17:17.518 20:59:45 -- common/autotest_common.sh@931 -- # uname 00:17:17.518 20:59:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.518 20:59:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117794 00:17:17.518 20:59:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:17.518 20:59:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:17.518 20:59:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117794' 00:17:17.518 killing process with pid 117794 00:17:17.518 20:59:45 -- common/autotest_common.sh@945 -- # kill 117794 00:17:17.518 [2024-06-09 20:59:45.600263] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.518 20:59:45 -- common/autotest_common.sh@950 -- # wait 117794 00:17:17.518 [2024-06-09 20:59:45.600507] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.480 ************************************ 00:17:18.480 END TEST raid_state_function_test 00:17:18.480 ************************************ 00:17:18.480 20:59:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:18.480 00:17:18.480 real 0m14.138s 00:17:18.480 user 0m25.097s 00:17:18.480 sys 0m1.705s 00:17:18.480 20:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.480 20:59:46 -- common/autotest_common.sh@10 -- # set +x
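The _sb variant that starts next re-runs the same state machine with superblock=true, so its bdev_raid_create call (visible further down in the trace) gains the -s flag and the dumped JSON reports "superblock": true. A hedged sketch of the one line that differs, assuming the same rpc and sock variables as the earlier sketches; the comment's description of -s reflects the superblock-on-base-bdevs behavior the raid_superblock_test trace exercised, not documentation quoted from the suite:

# Only the create call changes between the plain and _sb runs: -s asks the target
# to persist raid metadata in a superblock on the base bdevs, so the array can be
# re-assembled when the bdevs are examined later.
superblock_create_arg=-s   # the _sb test sets this; the plain test leaves it empty
"$rpc" -s "$sock" bdev_raid_create -z 64 $superblock_create_arg -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# $superblock_create_arg is left unquoted on purpose so an empty value vanishes from the command line.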
00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=118240 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118240' 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:18.739 Process raid pid: 118240 00:17:18.739 20:59:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118240 /var/tmp/spdk-raid.sock 00:17:18.739 20:59:46 -- common/autotest_common.sh@819 -- # '[' -z 118240 ']' 00:17:18.739 20:59:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.739 20:59:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.739 20:59:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.739 20:59:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.739 20:59:46 -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 [2024-06-09 20:59:46.778726] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
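Everything from here on is driven over a private RPC socket: a bare bdev_svc app comes up with raid debug logging, the harness polls until it listens, and every test step is an rpc.py call against that socket. Condensed into a sketch using only the paths and flags visible in this trace (capturing the pid with $! is an assumption; the trace only shows the resulting raid_pid=118240):

    # start the standalone bdev service on its own RPC socket, raid debug logs on
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!

    # wait for the app to answer on the socket, then drive it purely via rpc.py
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The -s flag is the point of this superblock variant of the test. Note also that bdev_raid_create may legitimately run before any base bdev exists: as the trace below shows, the raid bdev then sits in the configuring state until all four base devices are claimed.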
00:17:18.740 [2024-06-09 20:59:46.779109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.998 [2024-06-09 20:59:46.942505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.998 [2024-06-09 20:59:47.136687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.256 [2024-06-09 20:59:47.329110] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.824 20:59:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.824 20:59:47 -- common/autotest_common.sh@852 -- # return 0 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:19.824 [2024-06-09 20:59:47.933185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.824 [2024-06-09 20:59:47.933441] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.824 [2024-06-09 20:59:47.933588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.824 [2024-06-09 20:59:47.933658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.824 [2024-06-09 20:59:47.933756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.824 [2024-06-09 20:59:47.933836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.824 [2024-06-09 20:59:47.933870] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.824 [2024-06-09 20:59:47.934024] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.824 20:59:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.082 20:59:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.082 "name": "Existed_Raid", 00:17:20.082 "uuid": "36d24653-b42b-43c5-ba8c-f151a95d9d14", 00:17:20.082 "strip_size_kb": 64, 00:17:20.082 "state": "configuring", 00:17:20.082 "raid_level": "raid0", 00:17:20.082 "superblock": true, 00:17:20.082 "num_base_bdevs": 4, 00:17:20.082 "num_base_bdevs_discovered": 0, 00:17:20.082 "num_base_bdevs_operational": 4, 00:17:20.082 "base_bdevs_list": [ 00:17:20.082 { 00:17:20.082 
"name": "BaseBdev1", 00:17:20.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.082 "is_configured": false, 00:17:20.082 "data_offset": 0, 00:17:20.082 "data_size": 0 00:17:20.082 }, 00:17:20.082 { 00:17:20.082 "name": "BaseBdev2", 00:17:20.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.082 "is_configured": false, 00:17:20.082 "data_offset": 0, 00:17:20.082 "data_size": 0 00:17:20.082 }, 00:17:20.082 { 00:17:20.082 "name": "BaseBdev3", 00:17:20.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.082 "is_configured": false, 00:17:20.082 "data_offset": 0, 00:17:20.082 "data_size": 0 00:17:20.082 }, 00:17:20.082 { 00:17:20.082 "name": "BaseBdev4", 00:17:20.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.082 "is_configured": false, 00:17:20.082 "data_offset": 0, 00:17:20.082 "data_size": 0 00:17:20.082 } 00:17:20.082 ] 00:17:20.082 }' 00:17:20.082 20:59:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.082 20:59:48 -- common/autotest_common.sh@10 -- # set +x 00:17:20.650 20:59:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.908 [2024-06-09 20:59:48.985208] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.908 [2024-06-09 20:59:48.985385] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:20.908 20:59:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:21.167 [2024-06-09 20:59:49.245316] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:21.167 [2024-06-09 20:59:49.245551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:21.167 [2024-06-09 20:59:49.245657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.167 [2024-06-09 20:59:49.245725] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.167 [2024-06-09 20:59:49.245816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.167 [2024-06-09 20:59:49.245997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.167 [2024-06-09 20:59:49.246102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:21.167 [2024-06-09 20:59:49.246165] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:21.167 20:59:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.425 [2024-06-09 20:59:49.526862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.425 BaseBdev1 00:17:21.425 20:59:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:21.425 20:59:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:21.425 20:59:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:21.425 20:59:49 -- common/autotest_common.sh@889 -- # local i 00:17:21.425 20:59:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:21.426 20:59:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:21.426 20:59:49 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.684 20:59:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.943 [ 00:17:21.943 { 00:17:21.943 "name": "BaseBdev1", 00:17:21.943 "aliases": [ 00:17:21.943 "ae837ab2-149f-4de7-85d7-e8f37460c12f" 00:17:21.943 ], 00:17:21.943 "product_name": "Malloc disk", 00:17:21.943 "block_size": 512, 00:17:21.943 "num_blocks": 65536, 00:17:21.943 "uuid": "ae837ab2-149f-4de7-85d7-e8f37460c12f", 00:17:21.943 "assigned_rate_limits": { 00:17:21.943 "rw_ios_per_sec": 0, 00:17:21.943 "rw_mbytes_per_sec": 0, 00:17:21.943 "r_mbytes_per_sec": 0, 00:17:21.943 "w_mbytes_per_sec": 0 00:17:21.943 }, 00:17:21.943 "claimed": true, 00:17:21.943 "claim_type": "exclusive_write", 00:17:21.943 "zoned": false, 00:17:21.943 "supported_io_types": { 00:17:21.943 "read": true, 00:17:21.943 "write": true, 00:17:21.943 "unmap": true, 00:17:21.943 "write_zeroes": true, 00:17:21.943 "flush": true, 00:17:21.943 "reset": true, 00:17:21.943 "compare": false, 00:17:21.943 "compare_and_write": false, 00:17:21.943 "abort": true, 00:17:21.943 "nvme_admin": false, 00:17:21.943 "nvme_io": false 00:17:21.943 }, 00:17:21.943 "memory_domains": [ 00:17:21.943 { 00:17:21.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.943 "dma_device_type": 2 00:17:21.943 } 00:17:21.943 ], 00:17:21.943 "driver_specific": {} 00:17:21.943 } 00:17:21.943 ] 00:17:21.943 20:59:49 -- common/autotest_common.sh@895 -- # return 0 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.943 20:59:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.943 20:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.943 "name": "Existed_Raid", 00:17:21.943 "uuid": "5455a582-9844-4c17-b7fb-9b05cb06d2e6", 00:17:21.943 "strip_size_kb": 64, 00:17:21.943 "state": "configuring", 00:17:21.943 "raid_level": "raid0", 00:17:21.943 "superblock": true, 00:17:21.943 "num_base_bdevs": 4, 00:17:21.943 "num_base_bdevs_discovered": 1, 00:17:21.943 "num_base_bdevs_operational": 4, 00:17:21.943 "base_bdevs_list": [ 00:17:21.943 { 00:17:21.943 "name": "BaseBdev1", 00:17:21.943 "uuid": "ae837ab2-149f-4de7-85d7-e8f37460c12f", 00:17:21.943 "is_configured": true, 00:17:21.943 "data_offset": 2048, 00:17:21.943 "data_size": 63488 00:17:21.943 }, 00:17:21.943 { 00:17:21.943 "name": "BaseBdev2", 00:17:21.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.943 "is_configured": false, 00:17:21.943 "data_offset": 0, 00:17:21.943 "data_size": 0 00:17:21.943 }, 
00:17:21.943 { 00:17:21.943 "name": "BaseBdev3", 00:17:21.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.943 "is_configured": false, 00:17:21.943 "data_offset": 0, 00:17:21.943 "data_size": 0 00:17:21.943 }, 00:17:21.943 { 00:17:21.943 "name": "BaseBdev4", 00:17:21.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.943 "is_configured": false, 00:17:21.943 "data_offset": 0, 00:17:21.943 "data_size": 0 00:17:21.943 } 00:17:21.943 ] 00:17:21.943 }' 00:17:21.943 20:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.943 20:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.878 20:59:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.878 [2024-06-09 20:59:50.939215] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.878 [2024-06-09 20:59:50.939408] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:22.878 20:59:50 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:22.878 20:59:50 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:23.137 20:59:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:23.395 BaseBdev1 00:17:23.395 20:59:51 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:23.395 20:59:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:23.395 20:59:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.395 20:59:51 -- common/autotest_common.sh@889 -- # local i 00:17:23.395 20:59:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.395 20:59:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.395 20:59:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.653 20:59:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:23.910 [ 00:17:23.910 { 00:17:23.910 "name": "BaseBdev1", 00:17:23.910 "aliases": [ 00:17:23.910 "9ac365d2-6edd-47b9-b81f-d25d9d2ada06" 00:17:23.910 ], 00:17:23.910 "product_name": "Malloc disk", 00:17:23.910 "block_size": 512, 00:17:23.910 "num_blocks": 65536, 00:17:23.910 "uuid": "9ac365d2-6edd-47b9-b81f-d25d9d2ada06", 00:17:23.910 "assigned_rate_limits": { 00:17:23.910 "rw_ios_per_sec": 0, 00:17:23.910 "rw_mbytes_per_sec": 0, 00:17:23.910 "r_mbytes_per_sec": 0, 00:17:23.910 "w_mbytes_per_sec": 0 00:17:23.910 }, 00:17:23.910 "claimed": false, 00:17:23.910 "zoned": false, 00:17:23.910 "supported_io_types": { 00:17:23.910 "read": true, 00:17:23.910 "write": true, 00:17:23.910 "unmap": true, 00:17:23.910 "write_zeroes": true, 00:17:23.910 "flush": true, 00:17:23.910 "reset": true, 00:17:23.910 "compare": false, 00:17:23.910 "compare_and_write": false, 00:17:23.910 "abort": true, 00:17:23.910 "nvme_admin": false, 00:17:23.910 "nvme_io": false 00:17:23.910 }, 00:17:23.910 "memory_domains": [ 00:17:23.910 { 00:17:23.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.910 "dma_device_type": 2 00:17:23.910 } 00:17:23.910 ], 00:17:23.910 "driver_specific": {} 00:17:23.910 } 00:17:23.910 ] 00:17:23.910 20:59:51 -- common/autotest_common.sh@895 -- # return 0 00:17:23.910 20:59:51 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:24.167 [2024-06-09 20:59:52.092412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.167 [2024-06-09 20:59:52.094340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.167 [2024-06-09 20:59:52.094554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.167 [2024-06-09 20:59:52.094668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.167 [2024-06-09 20:59:52.094733] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.167 [2024-06-09 20:59:52.094867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.167 [2024-06-09 20:59:52.094931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.167 20:59:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.167 "name": "Existed_Raid", 00:17:24.167 "uuid": "b12ca0c2-63e8-458f-8933-d6348f20dd09", 00:17:24.167 "strip_size_kb": 64, 00:17:24.167 "state": "configuring", 00:17:24.167 "raid_level": "raid0", 00:17:24.167 "superblock": true, 00:17:24.167 "num_base_bdevs": 4, 00:17:24.167 "num_base_bdevs_discovered": 1, 00:17:24.167 "num_base_bdevs_operational": 4, 00:17:24.167 "base_bdevs_list": [ 00:17:24.167 { 00:17:24.167 "name": "BaseBdev1", 00:17:24.167 "uuid": "9ac365d2-6edd-47b9-b81f-d25d9d2ada06", 00:17:24.167 "is_configured": true, 00:17:24.167 "data_offset": 2048, 00:17:24.167 "data_size": 63488 00:17:24.167 }, 00:17:24.167 { 00:17:24.167 "name": "BaseBdev2", 00:17:24.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.167 "is_configured": false, 00:17:24.167 "data_offset": 0, 00:17:24.167 "data_size": 0 00:17:24.167 }, 00:17:24.167 { 00:17:24.167 "name": "BaseBdev3", 00:17:24.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.167 "is_configured": false, 00:17:24.167 "data_offset": 0, 00:17:24.167 "data_size": 0 00:17:24.167 }, 00:17:24.167 { 00:17:24.167 "name": "BaseBdev4", 00:17:24.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.167 "is_configured": 
false, 00:17:24.167 "data_offset": 0, 00:17:24.168 "data_size": 0 00:17:24.168 } 00:17:24.168 ] 00:17:24.168 }' 00:17:24.168 20:59:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.168 20:59:52 -- common/autotest_common.sh@10 -- # set +x 00:17:24.732 20:59:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:24.991 [2024-06-09 20:59:53.108737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.991 BaseBdev2 00:17:24.991 20:59:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:24.991 20:59:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:24.991 20:59:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.991 20:59:53 -- common/autotest_common.sh@889 -- # local i 00:17:24.991 20:59:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.991 20:59:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.991 20:59:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.249 20:59:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.507 [ 00:17:25.507 { 00:17:25.507 "name": "BaseBdev2", 00:17:25.507 "aliases": [ 00:17:25.507 "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc" 00:17:25.507 ], 00:17:25.507 "product_name": "Malloc disk", 00:17:25.507 "block_size": 512, 00:17:25.507 "num_blocks": 65536, 00:17:25.507 "uuid": "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc", 00:17:25.507 "assigned_rate_limits": { 00:17:25.507 "rw_ios_per_sec": 0, 00:17:25.507 "rw_mbytes_per_sec": 0, 00:17:25.507 "r_mbytes_per_sec": 0, 00:17:25.507 "w_mbytes_per_sec": 0 00:17:25.507 }, 00:17:25.507 "claimed": true, 00:17:25.507 "claim_type": "exclusive_write", 00:17:25.507 "zoned": false, 00:17:25.507 "supported_io_types": { 00:17:25.507 "read": true, 00:17:25.507 "write": true, 00:17:25.507 "unmap": true, 00:17:25.507 "write_zeroes": true, 00:17:25.507 "flush": true, 00:17:25.507 "reset": true, 00:17:25.507 "compare": false, 00:17:25.507 "compare_and_write": false, 00:17:25.507 "abort": true, 00:17:25.507 "nvme_admin": false, 00:17:25.507 "nvme_io": false 00:17:25.507 }, 00:17:25.507 "memory_domains": [ 00:17:25.507 { 00:17:25.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.507 "dma_device_type": 2 00:17:25.507 } 00:17:25.507 ], 00:17:25.507 "driver_specific": {} 00:17:25.507 } 00:17:25.507 ] 00:17:25.507 20:59:53 -- common/autotest_common.sh@895 -- # return 0 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.507 
20:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.507 20:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.765 20:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.765 "name": "Existed_Raid", 00:17:25.765 "uuid": "b12ca0c2-63e8-458f-8933-d6348f20dd09", 00:17:25.765 "strip_size_kb": 64, 00:17:25.765 "state": "configuring", 00:17:25.765 "raid_level": "raid0", 00:17:25.765 "superblock": true, 00:17:25.765 "num_base_bdevs": 4, 00:17:25.765 "num_base_bdevs_discovered": 2, 00:17:25.765 "num_base_bdevs_operational": 4, 00:17:25.765 "base_bdevs_list": [ 00:17:25.765 { 00:17:25.765 "name": "BaseBdev1", 00:17:25.765 "uuid": "9ac365d2-6edd-47b9-b81f-d25d9d2ada06", 00:17:25.765 "is_configured": true, 00:17:25.765 "data_offset": 2048, 00:17:25.765 "data_size": 63488 00:17:25.765 }, 00:17:25.765 { 00:17:25.765 "name": "BaseBdev2", 00:17:25.765 "uuid": "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc", 00:17:25.765 "is_configured": true, 00:17:25.765 "data_offset": 2048, 00:17:25.765 "data_size": 63488 00:17:25.765 }, 00:17:25.765 { 00:17:25.765 "name": "BaseBdev3", 00:17:25.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.765 "is_configured": false, 00:17:25.765 "data_offset": 0, 00:17:25.765 "data_size": 0 00:17:25.765 }, 00:17:25.765 { 00:17:25.765 "name": "BaseBdev4", 00:17:25.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.765 "is_configured": false, 00:17:25.765 "data_offset": 0, 00:17:25.765 "data_size": 0 00:17:25.765 } 00:17:25.765 ] 00:17:25.765 }' 00:17:25.765 20:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.765 20:59:53 -- common/autotest_common.sh@10 -- # set +x 00:17:26.332 20:59:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:26.591 [2024-06-09 20:59:54.568593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.591 BaseBdev3 00:17:26.591 20:59:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:26.591 20:59:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:26.591 20:59:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:26.591 20:59:54 -- common/autotest_common.sh@889 -- # local i 00:17:26.591 20:59:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:26.591 20:59:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:26.591 20:59:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:26.850 20:59:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:27.109 [ 00:17:27.109 { 00:17:27.109 "name": "BaseBdev3", 00:17:27.109 "aliases": [ 00:17:27.109 "5d41ecc4-9a41-446f-bef1-13f83a19655d" 00:17:27.109 ], 00:17:27.109 "product_name": "Malloc disk", 00:17:27.109 "block_size": 512, 00:17:27.109 "num_blocks": 65536, 00:17:27.109 "uuid": "5d41ecc4-9a41-446f-bef1-13f83a19655d", 00:17:27.109 "assigned_rate_limits": { 00:17:27.109 "rw_ios_per_sec": 0, 00:17:27.109 "rw_mbytes_per_sec": 0, 00:17:27.109 "r_mbytes_per_sec": 0, 00:17:27.109 "w_mbytes_per_sec": 0 00:17:27.109 }, 00:17:27.109 "claimed": true, 00:17:27.109 "claim_type": "exclusive_write", 00:17:27.109 "zoned": false, 
00:17:27.109 "supported_io_types": { 00:17:27.109 "read": true, 00:17:27.109 "write": true, 00:17:27.109 "unmap": true, 00:17:27.109 "write_zeroes": true, 00:17:27.109 "flush": true, 00:17:27.109 "reset": true, 00:17:27.109 "compare": false, 00:17:27.109 "compare_and_write": false, 00:17:27.109 "abort": true, 00:17:27.109 "nvme_admin": false, 00:17:27.109 "nvme_io": false 00:17:27.109 }, 00:17:27.109 "memory_domains": [ 00:17:27.109 { 00:17:27.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.109 "dma_device_type": 2 00:17:27.109 } 00:17:27.109 ], 00:17:27.109 "driver_specific": {} 00:17:27.109 } 00:17:27.109 ] 00:17:27.109 20:59:55 -- common/autotest_common.sh@895 -- # return 0 00:17:27.109 20:59:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.110 20:59:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.368 20:59:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.368 "name": "Existed_Raid", 00:17:27.368 "uuid": "b12ca0c2-63e8-458f-8933-d6348f20dd09", 00:17:27.368 "strip_size_kb": 64, 00:17:27.368 "state": "configuring", 00:17:27.368 "raid_level": "raid0", 00:17:27.368 "superblock": true, 00:17:27.368 "num_base_bdevs": 4, 00:17:27.368 "num_base_bdevs_discovered": 3, 00:17:27.368 "num_base_bdevs_operational": 4, 00:17:27.368 "base_bdevs_list": [ 00:17:27.368 { 00:17:27.368 "name": "BaseBdev1", 00:17:27.368 "uuid": "9ac365d2-6edd-47b9-b81f-d25d9d2ada06", 00:17:27.368 "is_configured": true, 00:17:27.368 "data_offset": 2048, 00:17:27.368 "data_size": 63488 00:17:27.368 }, 00:17:27.368 { 00:17:27.368 "name": "BaseBdev2", 00:17:27.368 "uuid": "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc", 00:17:27.368 "is_configured": true, 00:17:27.368 "data_offset": 2048, 00:17:27.368 "data_size": 63488 00:17:27.368 }, 00:17:27.368 { 00:17:27.368 "name": "BaseBdev3", 00:17:27.368 "uuid": "5d41ecc4-9a41-446f-bef1-13f83a19655d", 00:17:27.368 "is_configured": true, 00:17:27.368 "data_offset": 2048, 00:17:27.368 "data_size": 63488 00:17:27.368 }, 00:17:27.368 { 00:17:27.368 "name": "BaseBdev4", 00:17:27.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.369 "is_configured": false, 00:17:27.369 "data_offset": 0, 00:17:27.369 "data_size": 0 00:17:27.369 } 00:17:27.369 ] 00:17:27.369 }' 00:17:27.369 20:59:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.369 20:59:55 -- common/autotest_common.sh@10 -- # set +x 00:17:27.966 20:59:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:28.224 [2024-06-09 20:59:56.198099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:28.224 [2024-06-09 20:59:56.198627] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:28.224 [2024-06-09 20:59:56.198755] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:28.224 [2024-06-09 20:59:56.198993] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:28.224 [2024-06-09 20:59:56.199457] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:28.224 BaseBdev4 00:17:28.224 [2024-06-09 20:59:56.199598] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:28.224 [2024-06-09 20:59:56.199883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.224 20:59:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:28.224 20:59:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:28.224 20:59:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:28.224 20:59:56 -- common/autotest_common.sh@889 -- # local i 00:17:28.224 20:59:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:28.224 20:59:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:28.224 20:59:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.482 20:59:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:28.482 [ 00:17:28.482 { 00:17:28.482 "name": "BaseBdev4", 00:17:28.482 "aliases": [ 00:17:28.482 "82a15a4a-d975-459b-bac8-f27b8614bdfd" 00:17:28.482 ], 00:17:28.482 "product_name": "Malloc disk", 00:17:28.482 "block_size": 512, 00:17:28.482 "num_blocks": 65536, 00:17:28.482 "uuid": "82a15a4a-d975-459b-bac8-f27b8614bdfd", 00:17:28.482 "assigned_rate_limits": { 00:17:28.482 "rw_ios_per_sec": 0, 00:17:28.482 "rw_mbytes_per_sec": 0, 00:17:28.482 "r_mbytes_per_sec": 0, 00:17:28.482 "w_mbytes_per_sec": 0 00:17:28.482 }, 00:17:28.482 "claimed": true, 00:17:28.482 "claim_type": "exclusive_write", 00:17:28.482 "zoned": false, 00:17:28.482 "supported_io_types": { 00:17:28.482 "read": true, 00:17:28.482 "write": true, 00:17:28.482 "unmap": true, 00:17:28.482 "write_zeroes": true, 00:17:28.482 "flush": true, 00:17:28.482 "reset": true, 00:17:28.482 "compare": false, 00:17:28.482 "compare_and_write": false, 00:17:28.482 "abort": true, 00:17:28.482 "nvme_admin": false, 00:17:28.482 "nvme_io": false 00:17:28.482 }, 00:17:28.482 "memory_domains": [ 00:17:28.482 { 00:17:28.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.482 "dma_device_type": 2 00:17:28.482 } 00:17:28.482 ], 00:17:28.482 "driver_specific": {} 00:17:28.482 } 00:17:28.482 ] 00:17:28.482 20:59:56 -- common/autotest_common.sh@895 -- # return 0 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
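Each waitforbdev call in this trace expands to the same two RPCs: flush pending bdev examination, then look the bdev up with a bounded wait. A minimal sketch (the 2000 ms timeout mirrors the bdev_timeout=2000 default visible above; the body is a simplification of the real common helper):

    waitforbdev() {
        local bdev_name=$1
        # let in-flight bdev examine callbacks finish first
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_wait_for_examine
        # -t waits up to 2000 ms for the bdev to appear
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t 2000
    }

With BaseBdev4 claimed, the raid bdev finally has all four base devices and flips from configuring to online, which is what the verify_raid_bdev_state call being traced here checks.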
00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.482 20:59:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.740 20:59:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.740 20:59:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.740 20:59:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.740 "name": "Existed_Raid", 00:17:28.740 "uuid": "b12ca0c2-63e8-458f-8933-d6348f20dd09", 00:17:28.740 "strip_size_kb": 64, 00:17:28.740 "state": "online", 00:17:28.740 "raid_level": "raid0", 00:17:28.740 "superblock": true, 00:17:28.740 "num_base_bdevs": 4, 00:17:28.740 "num_base_bdevs_discovered": 4, 00:17:28.740 "num_base_bdevs_operational": 4, 00:17:28.740 "base_bdevs_list": [ 00:17:28.740 { 00:17:28.740 "name": "BaseBdev1", 00:17:28.740 "uuid": "9ac365d2-6edd-47b9-b81f-d25d9d2ada06", 00:17:28.740 "is_configured": true, 00:17:28.740 "data_offset": 2048, 00:17:28.740 "data_size": 63488 00:17:28.740 }, 00:17:28.740 { 00:17:28.740 "name": "BaseBdev2", 00:17:28.740 "uuid": "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc", 00:17:28.740 "is_configured": true, 00:17:28.740 "data_offset": 2048, 00:17:28.740 "data_size": 63488 00:17:28.740 }, 00:17:28.740 { 00:17:28.740 "name": "BaseBdev3", 00:17:28.740 "uuid": "5d41ecc4-9a41-446f-bef1-13f83a19655d", 00:17:28.740 "is_configured": true, 00:17:28.740 "data_offset": 2048, 00:17:28.740 "data_size": 63488 00:17:28.740 }, 00:17:28.740 { 00:17:28.740 "name": "BaseBdev4", 00:17:28.740 "uuid": "82a15a4a-d975-459b-bac8-f27b8614bdfd", 00:17:28.740 "is_configured": true, 00:17:28.740 "data_offset": 2048, 00:17:28.740 "data_size": 63488 00:17:28.740 } 00:17:28.740 ] 00:17:28.740 }' 00:17:28.740 20:59:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.740 20:59:56 -- common/autotest_common.sh@10 -- # set +x 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:29.674 [2024-06-09 20:59:57.690525] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.674 [2024-06-09 20:59:57.690773] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.674 [2024-06-09 20:59:57.690968] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.674 20:59:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.932 20:59:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.932 "name": "Existed_Raid", 00:17:29.932 "uuid": "b12ca0c2-63e8-458f-8933-d6348f20dd09", 00:17:29.932 "strip_size_kb": 64, 00:17:29.932 "state": "offline", 00:17:29.932 "raid_level": "raid0", 00:17:29.932 "superblock": true, 00:17:29.932 "num_base_bdevs": 4, 00:17:29.932 "num_base_bdevs_discovered": 3, 00:17:29.932 "num_base_bdevs_operational": 3, 00:17:29.932 "base_bdevs_list": [ 00:17:29.932 { 00:17:29.932 "name": null, 00:17:29.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.932 "is_configured": false, 00:17:29.932 "data_offset": 2048, 00:17:29.932 "data_size": 63488 00:17:29.932 }, 00:17:29.932 { 00:17:29.932 "name": "BaseBdev2", 00:17:29.932 "uuid": "23d0f60e-27c6-4f1f-b7b1-1b5cac96e3cc", 00:17:29.932 "is_configured": true, 00:17:29.932 "data_offset": 2048, 00:17:29.932 "data_size": 63488 00:17:29.932 }, 00:17:29.932 { 00:17:29.932 "name": "BaseBdev3", 00:17:29.932 "uuid": "5d41ecc4-9a41-446f-bef1-13f83a19655d", 00:17:29.932 "is_configured": true, 00:17:29.932 "data_offset": 2048, 00:17:29.932 "data_size": 63488 00:17:29.932 }, 00:17:29.932 { 00:17:29.932 "name": "BaseBdev4", 00:17:29.932 "uuid": "82a15a4a-d975-459b-bac8-f27b8614bdfd", 00:17:29.932 "is_configured": true, 00:17:29.932 "data_offset": 2048, 00:17:29.932 "data_size": 63488 00:17:29.932 } 00:17:29.932 ] 00:17:29.932 }' 00:17:29.932 20:59:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.932 20:59:58 -- common/autotest_common.sh@10 -- # set +x 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.868 20:59:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:31.127 [2024-06-09 20:59:59.151057] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.127 20:59:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:31.127 20:59:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:31.127 20:59:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.127 20:59:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:31.385 20:59:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:31.385 20:59:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.385 20:59:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:17:31.644 [2024-06-09 20:59:59.687641] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:31.644 20:59:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:31.644 20:59:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:31.644 20:59:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.644 20:59:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:31.902 21:00:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:31.902 21:00:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:31.902 21:00:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:32.160 [2024-06-09 21:00:00.254027] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:32.160 [2024-06-09 21:00:00.254270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:32.418 21:00:00 -- bdev/bdev_raid.sh@287 -- # killprocess 118240 00:17:32.418 21:00:00 -- common/autotest_common.sh@926 -- # '[' -z 118240 ']' 00:17:32.418 21:00:00 -- common/autotest_common.sh@930 -- # kill -0 118240 00:17:32.418 21:00:00 -- common/autotest_common.sh@931 -- # uname 00:17:32.418 21:00:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.418 21:00:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118240 00:17:32.418 killing process with pid 118240 00:17:32.418 21:00:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:32.418 21:00:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:32.418 21:00:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118240' 00:17:32.418 21:00:00 -- common/autotest_common.sh@945 -- # kill 118240 00:17:32.418 21:00:00 -- common/autotest_common.sh@950 -- # wait 118240 00:17:32.418 [2024-06-09 21:00:00.589362] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.418 [2024-06-09 21:00:00.589499] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.792 ************************************ 00:17:33.792 END TEST raid_state_function_test_sb 00:17:33.792 ************************************ 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:33.792 00:17:33.792 real 0m14.842s 00:17:33.792 user 0m26.326s 00:17:33.792 sys 0m1.899s 00:17:33.792 21:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.792 21:00:01 -- common/autotest_common.sh@10 -- # set +x 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:33.792 21:00:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:33.792 21:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:33.792 21:00:01 -- common/autotest_common.sh@10 -- # set +x 00:17:33.792 ************************************ 00:17:33.792 START 
TEST raid_superblock_test 00:17:33.792 ************************************ 00:17:33.792 21:00:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=118693 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:33.792 21:00:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118693 /var/tmp/spdk-raid.sock 00:17:33.792 21:00:01 -- common/autotest_common.sh@819 -- # '[' -z 118693 ']' 00:17:33.792 21:00:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:33.792 21:00:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.792 21:00:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:33.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:33.792 21:00:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.792 21:00:01 -- common/autotest_common.sh@10 -- # set +x 00:17:33.792 [2024-06-09 21:00:01.655634] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
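Once startup completes below, raid_superblock_test assembles its base devices differently from the state-function tests: each is a malloc bdev wrapped in a passthru bdev with a fixed UUID, so the superblock that the raid0 bdev writes through pt1..pt4 can later be recognized on the underlying malloc bdevs. Condensed from the per-device steps traced below (the loop form is an assumption; the sizing matches the trace, 32 MiB disks with 512-byte blocks):

    for i in 1 2 3 4; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_create 32 512 -b "malloc$i"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

The payoff comes at the end of the trace: after raid_bdev1 and the passthru layer are deleted, an attempted bdev_raid_create over malloc1..malloc4 is expected to fail with 'Existing raid superblock found', showing that the superblock survived on the backing devices.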
00:17:33.792 [2024-06-09 21:00:01.655998] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118693 ] 00:17:33.792 [2024-06-09 21:00:01.810445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.060 [2024-06-09 21:00:01.990429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.060 [2024-06-09 21:00:02.164690] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.631 21:00:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.631 21:00:02 -- common/autotest_common.sh@852 -- # return 0 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:34.631 21:00:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:34.888 malloc1 00:17:34.888 21:00:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.146 [2024-06-09 21:00:03.157642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.146 [2024-06-09 21:00:03.157950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.146 [2024-06-09 21:00:03.158026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:35.146 [2024-06-09 21:00:03.158338] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.146 [2024-06-09 21:00:03.160648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.146 [2024-06-09 21:00:03.160838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.146 pt1 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.146 21:00:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:35.404 malloc2 00:17:35.404 21:00:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:35.661 [2024-06-09 21:00:03.704155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.661 [2024-06-09 21:00:03.704466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.661 [2024-06-09 21:00:03.704552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:35.661 [2024-06-09 21:00:03.704808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.661 [2024-06-09 21:00:03.707252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.661 [2024-06-09 21:00:03.707435] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.661 pt2 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.661 21:00:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:35.919 malloc3 00:17:35.919 21:00:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.177 [2024-06-09 21:00:04.186676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.177 [2024-06-09 21:00:04.186937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.177 [2024-06-09 21:00:04.187028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:36.177 [2024-06-09 21:00:04.187297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.177 [2024-06-09 21:00:04.189683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.177 [2024-06-09 21:00:04.189885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.177 pt3 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.178 21:00:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:36.436 malloc4 00:17:36.436 21:00:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:17:36.694 [2024-06-09 21:00:04.677719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:36.694 [2024-06-09 21:00:04.678008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.694 [2024-06-09 21:00:04.678180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:36.694 [2024-06-09 21:00:04.678326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.694 [2024-06-09 21:00:04.680978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.694 [2024-06-09 21:00:04.681177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:36.694 pt4 00:17:36.694 21:00:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.694 21:00:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.694 21:00:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:36.952 [2024-06-09 21:00:04.873957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.952 [2024-06-09 21:00:04.876056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.952 [2024-06-09 21:00:04.876295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.952 [2024-06-09 21:00:04.876417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:36.952 [2024-06-09 21:00:04.876716] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:17:36.952 [2024-06-09 21:00:04.876865] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:36.952 [2024-06-09 21:00:04.877031] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:36.952 [2024-06-09 21:00:04.877513] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:17:36.952 [2024-06-09 21:00:04.877698] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:17:36.952 [2024-06-09 21:00:04.877994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.952 21:00:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.953 21:00:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.953 21:00:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.953 21:00:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.953 21:00:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.953 "name": "raid_bdev1", 00:17:36.953 "uuid": 
"f8a9907f-9f33-4800-bdcd-d7bc1804e3e0", 00:17:36.953 "strip_size_kb": 64, 00:17:36.953 "state": "online", 00:17:36.953 "raid_level": "raid0", 00:17:36.953 "superblock": true, 00:17:36.953 "num_base_bdevs": 4, 00:17:36.953 "num_base_bdevs_discovered": 4, 00:17:36.953 "num_base_bdevs_operational": 4, 00:17:36.953 "base_bdevs_list": [ 00:17:36.953 { 00:17:36.953 "name": "pt1", 00:17:36.953 "uuid": "d12fae83-8827-5a29-abd5-59b64600610b", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "pt2", 00:17:36.953 "uuid": "5f41a70b-7c55-5421-a759-ea75e7d9b365", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "pt3", 00:17:36.953 "uuid": "9d83e5ae-7581-57a0-88e9-2ae623bb4f22", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 }, 00:17:36.953 { 00:17:36.953 "name": "pt4", 00:17:36.953 "uuid": "727262cf-3791-56d3-a992-3ab7a659ff50", 00:17:36.953 "is_configured": true, 00:17:36.953 "data_offset": 2048, 00:17:36.953 "data_size": 63488 00:17:36.953 } 00:17:36.953 ] 00:17:36.953 }' 00:17:36.953 21:00:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.953 21:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:37.531 21:00:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.531 21:00:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:37.804 [2024-06-09 21:00:05.926420] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.804 21:00:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f8a9907f-9f33-4800-bdcd-d7bc1804e3e0 00:17:37.804 21:00:05 -- bdev/bdev_raid.sh@380 -- # '[' -z f8a9907f-9f33-4800-bdcd-d7bc1804e3e0 ']' 00:17:37.804 21:00:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:38.063 [2024-06-09 21:00:06.130260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.063 [2024-06-09 21:00:06.130415] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.063 [2024-06-09 21:00:06.130595] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.063 [2024-06-09 21:00:06.130887] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.063 [2024-06-09 21:00:06.131015] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:17:38.063 21:00:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.063 21:00:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:38.321 21:00:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:38.321 21:00:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:38.321 21:00:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.321 21:00:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:38.580 21:00:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.580 21:00:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:17:38.838 21:00:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.838 21:00:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:39.097 21:00:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.097 21:00:07 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:39.097 21:00:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:39.097 21:00:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:39.356 21:00:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:39.356 21:00:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:39.356 21:00:07 -- common/autotest_common.sh@640 -- # local es=0 00:17:39.356 21:00:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:39.356 21:00:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.356 21:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.356 21:00:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.356 21:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.356 21:00:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.356 21:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.356 21:00:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.356 21:00:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:39.356 21:00:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:39.615 [2024-06-09 21:00:07.670459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:39.615 [2024-06-09 21:00:07.672437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:39.615 [2024-06-09 21:00:07.672636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:39.615 [2024-06-09 21:00:07.672727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:39.615 [2024-06-09 21:00:07.672919] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:39.615 [2024-06-09 21:00:07.673116] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:39.615 [2024-06-09 21:00:07.673265] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:39.615 [2024-06-09 21:00:07.673440] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:39.615 [2024-06-09 21:00:07.673503] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.615 [2024-06-09 21:00:07.673592] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:17:39.615 request: 00:17:39.615 { 00:17:39.615 "name": "raid_bdev1", 00:17:39.615 "raid_level": "raid0", 00:17:39.615 "base_bdevs": [ 00:17:39.615 "malloc1", 00:17:39.615 "malloc2", 00:17:39.615 "malloc3", 00:17:39.615 "malloc4" 00:17:39.615 ], 00:17:39.615 "superblock": false, 00:17:39.615 "strip_size_kb": 64, 00:17:39.615 "method": "bdev_raid_create", 00:17:39.615 "req_id": 1 00:17:39.615 } 00:17:39.615 Got JSON-RPC error response 00:17:39.615 response: 00:17:39.615 { 00:17:39.615 "code": -17, 00:17:39.615 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:39.615 } 00:17:39.615 21:00:07 -- common/autotest_common.sh@643 -- # es=1 00:17:39.615 21:00:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:39.615 21:00:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:39.615 21:00:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:39.615 21:00:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.615 21:00:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:39.873 21:00:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:39.873 21:00:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:39.873 21:00:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.130 [2024-06-09 21:00:08.138555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.130 [2024-06-09 21:00:08.138926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.130 [2024-06-09 21:00:08.139024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:40.130 [2024-06-09 21:00:08.139285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.130 [2024-06-09 21:00:08.141743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.130 [2024-06-09 21:00:08.141978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.130 [2024-06-09 21:00:08.142244] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:40.130 [2024-06-09 21:00:08.142413] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.130 pt1 00:17:40.130 21:00:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:40.130 21:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
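For reference, the failure just logged can be reproduced by hand against the same RPC socket. A minimal sketch, assuming the rpc.py path and socket captured in this log; the $rpc shorthand is introduced here for readability and is not part of the run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # The four malloc bdevs still carry the raid superblock written by the
  # earlier raid_bdev1, so creating a new array directly on top of them is
  # expected to fail with JSON-RPC error -17 (File exists).
  if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "expected bdev_raid_create to fail on stale superblocks" >&2
    exit 1
  fi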
00:17:40.131 21:00:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.388 21:00:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.388 "name": "raid_bdev1", 00:17:40.388 "uuid": "f8a9907f-9f33-4800-bdcd-d7bc1804e3e0", 00:17:40.388 "strip_size_kb": 64, 00:17:40.388 "state": "configuring", 00:17:40.388 "raid_level": "raid0", 00:17:40.389 "superblock": true, 00:17:40.389 "num_base_bdevs": 4, 00:17:40.389 "num_base_bdevs_discovered": 1, 00:17:40.389 "num_base_bdevs_operational": 4, 00:17:40.389 "base_bdevs_list": [ 00:17:40.389 { 00:17:40.389 "name": "pt1", 00:17:40.389 "uuid": "d12fae83-8827-5a29-abd5-59b64600610b", 00:17:40.389 "is_configured": true, 00:17:40.389 "data_offset": 2048, 00:17:40.389 "data_size": 63488 00:17:40.389 }, 00:17:40.389 { 00:17:40.389 "name": null, 00:17:40.389 "uuid": "5f41a70b-7c55-5421-a759-ea75e7d9b365", 00:17:40.389 "is_configured": false, 00:17:40.389 "data_offset": 2048, 00:17:40.389 "data_size": 63488 00:17:40.389 }, 00:17:40.389 { 00:17:40.389 "name": null, 00:17:40.389 "uuid": "9d83e5ae-7581-57a0-88e9-2ae623bb4f22", 00:17:40.389 "is_configured": false, 00:17:40.389 "data_offset": 2048, 00:17:40.389 "data_size": 63488 00:17:40.389 }, 00:17:40.389 { 00:17:40.389 "name": null, 00:17:40.389 "uuid": "727262cf-3791-56d3-a992-3ab7a659ff50", 00:17:40.389 "is_configured": false, 00:17:40.389 "data_offset": 2048, 00:17:40.389 "data_size": 63488 00:17:40.389 } 00:17:40.389 ] 00:17:40.389 }' 00:17:40.389 21:00:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.389 21:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:40.961 21:00:09 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:40.961 21:00:09 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.219 [2024-06-09 21:00:09.186982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.219 [2024-06-09 21:00:09.187295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.219 [2024-06-09 21:00:09.187381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:41.219 [2024-06-09 21:00:09.187646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.219 [2024-06-09 21:00:09.188284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.219 [2024-06-09 21:00:09.188499] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.219 [2024-06-09 21:00:09.188737] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:41.219 [2024-06-09 21:00:09.188862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.219 pt2 00:17:41.219 21:00:09 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:41.478 [2024-06-09 21:00:09.427014] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:41.478 21:00:09 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.478 21:00:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.736 21:00:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.736 "name": "raid_bdev1", 00:17:41.736 "uuid": "f8a9907f-9f33-4800-bdcd-d7bc1804e3e0", 00:17:41.736 "strip_size_kb": 64, 00:17:41.736 "state": "configuring", 00:17:41.736 "raid_level": "raid0", 00:17:41.736 "superblock": true, 00:17:41.736 "num_base_bdevs": 4, 00:17:41.736 "num_base_bdevs_discovered": 1, 00:17:41.736 "num_base_bdevs_operational": 4, 00:17:41.736 "base_bdevs_list": [ 00:17:41.736 { 00:17:41.736 "name": "pt1", 00:17:41.736 "uuid": "d12fae83-8827-5a29-abd5-59b64600610b", 00:17:41.736 "is_configured": true, 00:17:41.736 "data_offset": 2048, 00:17:41.736 "data_size": 63488 00:17:41.736 }, 00:17:41.736 { 00:17:41.736 "name": null, 00:17:41.736 "uuid": "5f41a70b-7c55-5421-a759-ea75e7d9b365", 00:17:41.736 "is_configured": false, 00:17:41.736 "data_offset": 2048, 00:17:41.736 "data_size": 63488 00:17:41.736 }, 00:17:41.736 { 00:17:41.736 "name": null, 00:17:41.736 "uuid": "9d83e5ae-7581-57a0-88e9-2ae623bb4f22", 00:17:41.736 "is_configured": false, 00:17:41.736 "data_offset": 2048, 00:17:41.736 "data_size": 63488 00:17:41.736 }, 00:17:41.736 { 00:17:41.736 "name": null, 00:17:41.736 "uuid": "727262cf-3791-56d3-a992-3ab7a659ff50", 00:17:41.736 "is_configured": false, 00:17:41.736 "data_offset": 2048, 00:17:41.736 "data_size": 63488 00:17:41.736 } 00:17:41.736 ] 00:17:41.736 }' 00:17:41.736 21:00:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.736 21:00:09 -- common/autotest_common.sh@10 -- # set +x 00:17:42.302 21:00:10 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:42.302 21:00:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.302 21:00:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.560 [2024-06-09 21:00:10.543320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.560 [2024-06-09 21:00:10.543632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.560 [2024-06-09 21:00:10.543721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:42.560 [2024-06-09 21:00:10.543966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.560 [2024-06-09 21:00:10.544562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.560 [2024-06-09 21:00:10.544791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.560 [2024-06-09 21:00:10.545033] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:42.561 [2024-06-09 21:00:10.545188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.561 pt2 00:17:42.561 21:00:10 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:42.561 21:00:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.561 21:00:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:42.819 [2024-06-09 21:00:10.779342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:42.819 [2024-06-09 21:00:10.779551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.819 [2024-06-09 21:00:10.779619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:42.819 [2024-06-09 21:00:10.779744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.819 [2024-06-09 21:00:10.780274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.819 [2024-06-09 21:00:10.780476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.819 [2024-06-09 21:00:10.780702] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:42.819 [2024-06-09 21:00:10.780832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.819 pt3 00:17:42.819 21:00:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:42.819 21:00:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:42.819 21:00:10 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:43.078 [2024-06-09 21:00:11.023415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:43.078 [2024-06-09 21:00:11.023690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.078 [2024-06-09 21:00:11.023778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:43.078 [2024-06-09 21:00:11.024035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.078 [2024-06-09 21:00:11.024533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.078 [2024-06-09 21:00:11.024738] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:43.078 [2024-06-09 21:00:11.024973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:43.078 [2024-06-09 21:00:11.025107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:43.078 [2024-06-09 21:00:11.025292] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:17:43.078 [2024-06-09 21:00:11.025402] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:43.078 [2024-06-09 21:00:11.025659] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:43.078 [2024-06-09 21:00:11.026146] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:17:43.078 [2024-06-09 21:00:11.026266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:17:43.078 [2024-06-09 21:00:11.026493] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.078 pt4 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.078 21:00:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.336 21:00:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.336 "name": "raid_bdev1", 00:17:43.336 "uuid": "f8a9907f-9f33-4800-bdcd-d7bc1804e3e0", 00:17:43.336 "strip_size_kb": 64, 00:17:43.336 "state": "online", 00:17:43.336 "raid_level": "raid0", 00:17:43.336 "superblock": true, 00:17:43.336 "num_base_bdevs": 4, 00:17:43.336 "num_base_bdevs_discovered": 4, 00:17:43.336 "num_base_bdevs_operational": 4, 00:17:43.336 "base_bdevs_list": [ 00:17:43.336 { 00:17:43.336 "name": "pt1", 00:17:43.336 "uuid": "d12fae83-8827-5a29-abd5-59b64600610b", 00:17:43.336 "is_configured": true, 00:17:43.336 "data_offset": 2048, 00:17:43.336 "data_size": 63488 00:17:43.336 }, 00:17:43.336 { 00:17:43.337 "name": "pt2", 00:17:43.337 "uuid": "5f41a70b-7c55-5421-a759-ea75e7d9b365", 00:17:43.337 "is_configured": true, 00:17:43.337 "data_offset": 2048, 00:17:43.337 "data_size": 63488 00:17:43.337 }, 00:17:43.337 { 00:17:43.337 "name": "pt3", 00:17:43.337 "uuid": "9d83e5ae-7581-57a0-88e9-2ae623bb4f22", 00:17:43.337 "is_configured": true, 00:17:43.337 "data_offset": 2048, 00:17:43.337 "data_size": 63488 00:17:43.337 }, 00:17:43.337 { 00:17:43.337 "name": "pt4", 00:17:43.337 "uuid": "727262cf-3791-56d3-a992-3ab7a659ff50", 00:17:43.337 "is_configured": true, 00:17:43.337 "data_offset": 2048, 00:17:43.337 "data_size": 63488 00:17:43.337 } 00:17:43.337 ] 00:17:43.337 }' 00:17:43.337 21:00:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.337 21:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:43.903 21:00:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:43.903 21:00:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:44.160 [2024-06-09 21:00:12.149435] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.160 21:00:12 -- bdev/bdev_raid.sh@430 -- # '[' f8a9907f-9f33-4800-bdcd-d7bc1804e3e0 '!=' f8a9907f-9f33-4800-bdcd-d7bc1804e3e0 ']' 00:17:44.160 21:00:12 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:44.160 21:00:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:44.160 21:00:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:44.160 21:00:12 -- bdev/bdev_raid.sh@511 -- # killprocess 118693 00:17:44.160 21:00:12 -- common/autotest_common.sh@926 -- # '[' -z 118693 ']' 00:17:44.160 21:00:12 -- common/autotest_common.sh@930 -- # kill -0 118693 00:17:44.160 21:00:12 -- common/autotest_common.sh@931 -- # uname 00:17:44.160 21:00:12 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:44.160 21:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118693 00:17:44.160 21:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:44.160 21:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:44.160 21:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118693' 00:17:44.160 killing process with pid 118693 00:17:44.160 21:00:12 -- common/autotest_common.sh@945 -- # kill 118693 00:17:44.160 [2024-06-09 21:00:12.193679] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.160 21:00:12 -- common/autotest_common.sh@950 -- # wait 118693 00:17:44.160 [2024-06-09 21:00:12.193928] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.160 [2024-06-09 21:00:12.194054] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.160 [2024-06-09 21:00:12.194171] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:17:44.418 [2024-06-09 21:00:12.459201] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.354 ************************************ 00:17:45.354 END TEST raid_superblock_test 00:17:45.354 ************************************ 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:45.354 00:17:45.354 real 0m11.820s 00:17:45.354 user 0m20.648s 00:17:45.354 sys 0m1.339s 00:17:45.354 21:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.354 21:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:45.354 21:00:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:45.354 21:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.354 21:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:45.354 ************************************ 00:17:45.354 START TEST raid_state_function_test 00:17:45.354 ************************************ 00:17:45.354 21:00:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:45.354 
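Both tests in this run drive their assertions through the same pattern: dump the array's info blob with bdev_raid_get_bdevs and filter it with jq, as verify_raid_bdev_state does above. A minimal sketch of that check, assuming the same socket and the Existed_Raid name used by raid_state_function_test:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Fetch the info for one array and compare the reported state against the
  # expected value ("configuring", "online" or "offline").
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ "$(jq -r '.state' <<< "$info")" == "configuring" ]] || exit 1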
21:00:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=119013 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:45.354 Process raid pid: 119013 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119013' 00:17:45.354 21:00:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119013 /var/tmp/spdk-raid.sock 00:17:45.354 21:00:13 -- common/autotest_common.sh@819 -- # '[' -z 119013 ']' 00:17:45.354 21:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.354 21:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:45.354 21:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.354 21:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:45.355 21:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:45.613 [2024-06-09 21:00:13.553577] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:45.613 [2024-06-09 21:00:13.553963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.613 [2024-06-09 21:00:13.720961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.871 [2024-06-09 21:00:13.902435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.129 [2024-06-09 21:00:14.080488] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.388 21:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:46.388 21:00:14 -- common/autotest_common.sh@852 -- # return 0 00:17:46.388 21:00:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:46.646 [2024-06-09 21:00:14.742220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.646 [2024-06-09 21:00:14.742451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.646 [2024-06-09 21:00:14.742574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.646 [2024-06-09 21:00:14.742721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.646 [2024-06-09 21:00:14.742829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.646 [2024-06-09 21:00:14.742945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.646 [2024-06-09 21:00:14.743136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.646 [2024-06-09 21:00:14.743212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.646 21:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.904 21:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.904 "name": "Existed_Raid", 00:17:46.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.904 "strip_size_kb": 64, 00:17:46.904 "state": "configuring", 00:17:46.904 "raid_level": "concat", 00:17:46.905 "superblock": false, 00:17:46.905 "num_base_bdevs": 4, 00:17:46.905 "num_base_bdevs_discovered": 0, 00:17:46.905 "num_base_bdevs_operational": 4, 00:17:46.905 "base_bdevs_list": [ 00:17:46.905 { 00:17:46.905 
"name": "BaseBdev1", 00:17:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.905 "is_configured": false, 00:17:46.905 "data_offset": 0, 00:17:46.905 "data_size": 0 00:17:46.905 }, 00:17:46.905 { 00:17:46.905 "name": "BaseBdev2", 00:17:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.905 "is_configured": false, 00:17:46.905 "data_offset": 0, 00:17:46.905 "data_size": 0 00:17:46.905 }, 00:17:46.905 { 00:17:46.905 "name": "BaseBdev3", 00:17:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.905 "is_configured": false, 00:17:46.905 "data_offset": 0, 00:17:46.905 "data_size": 0 00:17:46.905 }, 00:17:46.905 { 00:17:46.905 "name": "BaseBdev4", 00:17:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.905 "is_configured": false, 00:17:46.905 "data_offset": 0, 00:17:46.905 "data_size": 0 00:17:46.905 } 00:17:46.905 ] 00:17:46.905 }' 00:17:46.905 21:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.905 21:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:47.470 21:00:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:47.729 [2024-06-09 21:00:15.846309] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.729 [2024-06-09 21:00:15.846513] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:47.729 21:00:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:47.987 [2024-06-09 21:00:16.038370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.987 [2024-06-09 21:00:16.038603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.987 [2024-06-09 21:00:16.038743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.987 [2024-06-09 21:00:16.038815] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.987 [2024-06-09 21:00:16.039040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.987 [2024-06-09 21:00:16.039125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.987 [2024-06-09 21:00:16.039177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.987 [2024-06-09 21:00:16.039417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.987 21:00:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:48.245 [2024-06-09 21:00:16.321541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.245 BaseBdev1 00:17:48.245 21:00:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:48.245 21:00:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:48.245 21:00:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:48.245 21:00:16 -- common/autotest_common.sh@889 -- # local i 00:17:48.245 21:00:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:48.245 21:00:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:48.245 21:00:16 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:48.504 21:00:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.763 [ 00:17:48.763 { 00:17:48.763 "name": "BaseBdev1", 00:17:48.763 "aliases": [ 00:17:48.763 "ae80ab78-703b-45cc-9bce-dfdedf59adff" 00:17:48.763 ], 00:17:48.763 "product_name": "Malloc disk", 00:17:48.763 "block_size": 512, 00:17:48.763 "num_blocks": 65536, 00:17:48.763 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:48.763 "assigned_rate_limits": { 00:17:48.763 "rw_ios_per_sec": 0, 00:17:48.763 "rw_mbytes_per_sec": 0, 00:17:48.763 "r_mbytes_per_sec": 0, 00:17:48.763 "w_mbytes_per_sec": 0 00:17:48.763 }, 00:17:48.763 "claimed": true, 00:17:48.763 "claim_type": "exclusive_write", 00:17:48.763 "zoned": false, 00:17:48.763 "supported_io_types": { 00:17:48.763 "read": true, 00:17:48.763 "write": true, 00:17:48.763 "unmap": true, 00:17:48.764 "write_zeroes": true, 00:17:48.764 "flush": true, 00:17:48.764 "reset": true, 00:17:48.764 "compare": false, 00:17:48.764 "compare_and_write": false, 00:17:48.764 "abort": true, 00:17:48.764 "nvme_admin": false, 00:17:48.764 "nvme_io": false 00:17:48.764 }, 00:17:48.764 "memory_domains": [ 00:17:48.764 { 00:17:48.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.764 "dma_device_type": 2 00:17:48.764 } 00:17:48.764 ], 00:17:48.764 "driver_specific": {} 00:17:48.764 } 00:17:48.764 ] 00:17:48.764 21:00:16 -- common/autotest_common.sh@895 -- # return 0 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.764 21:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.022 21:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.022 "name": "Existed_Raid", 00:17:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.022 "strip_size_kb": 64, 00:17:49.022 "state": "configuring", 00:17:49.022 "raid_level": "concat", 00:17:49.022 "superblock": false, 00:17:49.022 "num_base_bdevs": 4, 00:17:49.022 "num_base_bdevs_discovered": 1, 00:17:49.022 "num_base_bdevs_operational": 4, 00:17:49.022 "base_bdevs_list": [ 00:17:49.022 { 00:17:49.022 "name": "BaseBdev1", 00:17:49.022 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:49.022 "is_configured": true, 00:17:49.022 "data_offset": 0, 00:17:49.022 "data_size": 65536 00:17:49.022 }, 00:17:49.022 { 00:17:49.022 "name": "BaseBdev2", 00:17:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.022 "is_configured": false, 00:17:49.022 "data_offset": 0, 00:17:49.022 "data_size": 0 00:17:49.022 }, 
00:17:49.022 { 00:17:49.022 "name": "BaseBdev3", 00:17:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.022 "is_configured": false, 00:17:49.022 "data_offset": 0, 00:17:49.022 "data_size": 0 00:17:49.022 }, 00:17:49.022 { 00:17:49.022 "name": "BaseBdev4", 00:17:49.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.022 "is_configured": false, 00:17:49.022 "data_offset": 0, 00:17:49.022 "data_size": 0 00:17:49.022 } 00:17:49.022 ] 00:17:49.022 }' 00:17:49.022 21:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.022 21:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:49.588 21:00:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.846 [2024-06-09 21:00:17.797920] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.846 [2024-06-09 21:00:17.798166] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:49.846 21:00:17 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:49.846 21:00:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:49.846 [2024-06-09 21:00:17.997987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.846 [2024-06-09 21:00:17.999942] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.846 [2024-06-09 21:00:18.000152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.846 [2024-06-09 21:00:18.000270] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.846 [2024-06-09 21:00:18.000405] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.846 [2024-06-09 21:00:18.000505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:49.846 [2024-06-09 21:00:18.000562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.846 21:00:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.105 21:00:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.105 "name": "Existed_Raid", 00:17:50.105 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.105 "strip_size_kb": 64, 00:17:50.105 "state": "configuring", 00:17:50.105 "raid_level": "concat", 00:17:50.105 "superblock": false, 00:17:50.105 "num_base_bdevs": 4, 00:17:50.105 "num_base_bdevs_discovered": 1, 00:17:50.105 "num_base_bdevs_operational": 4, 00:17:50.105 "base_bdevs_list": [ 00:17:50.105 { 00:17:50.105 "name": "BaseBdev1", 00:17:50.105 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:50.105 "is_configured": true, 00:17:50.105 "data_offset": 0, 00:17:50.105 "data_size": 65536 00:17:50.105 }, 00:17:50.105 { 00:17:50.105 "name": "BaseBdev2", 00:17:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.105 "is_configured": false, 00:17:50.105 "data_offset": 0, 00:17:50.105 "data_size": 0 00:17:50.105 }, 00:17:50.105 { 00:17:50.105 "name": "BaseBdev3", 00:17:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.105 "is_configured": false, 00:17:50.105 "data_offset": 0, 00:17:50.105 "data_size": 0 00:17:50.105 }, 00:17:50.105 { 00:17:50.105 "name": "BaseBdev4", 00:17:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.105 "is_configured": false, 00:17:50.105 "data_offset": 0, 00:17:50.105 "data_size": 0 00:17:50.105 } 00:17:50.105 ] 00:17:50.105 }' 00:17:50.105 21:00:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.105 21:00:18 -- common/autotest_common.sh@10 -- # set +x 00:17:51.041 21:00:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.041 [2024-06-09 21:00:19.158113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.041 BaseBdev2 00:17:51.041 21:00:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:51.041 21:00:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:51.041 21:00:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:51.041 21:00:19 -- common/autotest_common.sh@889 -- # local i 00:17:51.041 21:00:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:51.041 21:00:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:51.041 21:00:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.299 21:00:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.558 [ 00:17:51.559 { 00:17:51.559 "name": "BaseBdev2", 00:17:51.559 "aliases": [ 00:17:51.559 "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8" 00:17:51.559 ], 00:17:51.559 "product_name": "Malloc disk", 00:17:51.559 "block_size": 512, 00:17:51.559 "num_blocks": 65536, 00:17:51.559 "uuid": "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8", 00:17:51.559 "assigned_rate_limits": { 00:17:51.559 "rw_ios_per_sec": 0, 00:17:51.559 "rw_mbytes_per_sec": 0, 00:17:51.559 "r_mbytes_per_sec": 0, 00:17:51.559 "w_mbytes_per_sec": 0 00:17:51.559 }, 00:17:51.559 "claimed": true, 00:17:51.559 "claim_type": "exclusive_write", 00:17:51.559 "zoned": false, 00:17:51.559 "supported_io_types": { 00:17:51.559 "read": true, 00:17:51.559 "write": true, 00:17:51.559 "unmap": true, 00:17:51.559 "write_zeroes": true, 00:17:51.559 "flush": true, 00:17:51.559 "reset": true, 00:17:51.559 "compare": false, 00:17:51.559 "compare_and_write": false, 00:17:51.559 "abort": true, 00:17:51.559 "nvme_admin": false, 00:17:51.559 "nvme_io": false 00:17:51.559 }, 00:17:51.559 "memory_domains": [ 
00:17:51.559 { 00:17:51.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.559 "dma_device_type": 2 00:17:51.559 } 00:17:51.559 ], 00:17:51.559 "driver_specific": {} 00:17:51.559 } 00:17:51.559 ] 00:17:51.559 21:00:19 -- common/autotest_common.sh@895 -- # return 0 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.559 21:00:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.817 21:00:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.817 "name": "Existed_Raid", 00:17:51.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.817 "strip_size_kb": 64, 00:17:51.817 "state": "configuring", 00:17:51.817 "raid_level": "concat", 00:17:51.817 "superblock": false, 00:17:51.817 "num_base_bdevs": 4, 00:17:51.817 "num_base_bdevs_discovered": 2, 00:17:51.817 "num_base_bdevs_operational": 4, 00:17:51.817 "base_bdevs_list": [ 00:17:51.817 { 00:17:51.817 "name": "BaseBdev1", 00:17:51.817 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:51.817 "is_configured": true, 00:17:51.817 "data_offset": 0, 00:17:51.817 "data_size": 65536 00:17:51.817 }, 00:17:51.817 { 00:17:51.817 "name": "BaseBdev2", 00:17:51.818 "uuid": "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8", 00:17:51.818 "is_configured": true, 00:17:51.818 "data_offset": 0, 00:17:51.818 "data_size": 65536 00:17:51.818 }, 00:17:51.818 { 00:17:51.818 "name": "BaseBdev3", 00:17:51.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.818 "is_configured": false, 00:17:51.818 "data_offset": 0, 00:17:51.818 "data_size": 0 00:17:51.818 }, 00:17:51.818 { 00:17:51.818 "name": "BaseBdev4", 00:17:51.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.818 "is_configured": false, 00:17:51.818 "data_offset": 0, 00:17:51.818 "data_size": 0 00:17:51.818 } 00:17:51.818 ] 00:17:51.818 }' 00:17:51.818 21:00:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.818 21:00:19 -- common/autotest_common.sh@10 -- # set +x 00:17:52.384 21:00:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:52.643 [2024-06-09 21:00:20.728882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.643 BaseBdev3 00:17:52.643 21:00:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:52.643 21:00:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:52.643 21:00:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:52.643 
21:00:20 -- common/autotest_common.sh@889 -- # local i 00:17:52.643 21:00:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:52.643 21:00:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:52.643 21:00:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.901 21:00:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:53.159 [ 00:17:53.159 { 00:17:53.159 "name": "BaseBdev3", 00:17:53.159 "aliases": [ 00:17:53.159 "8d5f5081-bd72-49e9-91ee-0f700a170f02" 00:17:53.159 ], 00:17:53.159 "product_name": "Malloc disk", 00:17:53.159 "block_size": 512, 00:17:53.159 "num_blocks": 65536, 00:17:53.159 "uuid": "8d5f5081-bd72-49e9-91ee-0f700a170f02", 00:17:53.159 "assigned_rate_limits": { 00:17:53.159 "rw_ios_per_sec": 0, 00:17:53.159 "rw_mbytes_per_sec": 0, 00:17:53.159 "r_mbytes_per_sec": 0, 00:17:53.159 "w_mbytes_per_sec": 0 00:17:53.159 }, 00:17:53.159 "claimed": true, 00:17:53.159 "claim_type": "exclusive_write", 00:17:53.159 "zoned": false, 00:17:53.159 "supported_io_types": { 00:17:53.159 "read": true, 00:17:53.159 "write": true, 00:17:53.159 "unmap": true, 00:17:53.159 "write_zeroes": true, 00:17:53.159 "flush": true, 00:17:53.159 "reset": true, 00:17:53.159 "compare": false, 00:17:53.159 "compare_and_write": false, 00:17:53.159 "abort": true, 00:17:53.159 "nvme_admin": false, 00:17:53.159 "nvme_io": false 00:17:53.159 }, 00:17:53.159 "memory_domains": [ 00:17:53.159 { 00:17:53.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.159 "dma_device_type": 2 00:17:53.159 } 00:17:53.159 ], 00:17:53.159 "driver_specific": {} 00:17:53.159 } 00:17:53.159 ] 00:17:53.159 21:00:21 -- common/autotest_common.sh@895 -- # return 0 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.159 21:00:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.417 21:00:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.417 "name": "Existed_Raid", 00:17:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.417 "strip_size_kb": 64, 00:17:53.417 "state": "configuring", 00:17:53.417 "raid_level": "concat", 00:17:53.417 "superblock": false, 00:17:53.417 "num_base_bdevs": 4, 00:17:53.417 "num_base_bdevs_discovered": 3, 00:17:53.417 "num_base_bdevs_operational": 4, 00:17:53.417 "base_bdevs_list": [ 00:17:53.417 { 00:17:53.417 "name": 
"BaseBdev1", 00:17:53.417 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:53.417 "is_configured": true, 00:17:53.417 "data_offset": 0, 00:17:53.417 "data_size": 65536 00:17:53.417 }, 00:17:53.417 { 00:17:53.417 "name": "BaseBdev2", 00:17:53.417 "uuid": "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8", 00:17:53.417 "is_configured": true, 00:17:53.417 "data_offset": 0, 00:17:53.417 "data_size": 65536 00:17:53.417 }, 00:17:53.417 { 00:17:53.417 "name": "BaseBdev3", 00:17:53.417 "uuid": "8d5f5081-bd72-49e9-91ee-0f700a170f02", 00:17:53.417 "is_configured": true, 00:17:53.417 "data_offset": 0, 00:17:53.417 "data_size": 65536 00:17:53.417 }, 00:17:53.417 { 00:17:53.417 "name": "BaseBdev4", 00:17:53.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.417 "is_configured": false, 00:17:53.417 "data_offset": 0, 00:17:53.417 "data_size": 0 00:17:53.417 } 00:17:53.417 ] 00:17:53.417 }' 00:17:53.417 21:00:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.417 21:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.983 21:00:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:54.240 [2024-06-09 21:00:22.346774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:54.240 [2024-06-09 21:00:22.347035] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:54.240 [2024-06-09 21:00:22.347079] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:54.240 [2024-06-09 21:00:22.347327] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:54.240 [2024-06-09 21:00:22.347798] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:54.240 [2024-06-09 21:00:22.347986] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:54.240 [2024-06-09 21:00:22.348353] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.240 BaseBdev4 00:17:54.240 21:00:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:54.240 21:00:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:54.240 21:00:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:54.240 21:00:22 -- common/autotest_common.sh@889 -- # local i 00:17:54.240 21:00:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:54.240 21:00:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:54.240 21:00:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.497 21:00:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:54.756 [ 00:17:54.756 { 00:17:54.756 "name": "BaseBdev4", 00:17:54.756 "aliases": [ 00:17:54.756 "14899ea2-56c4-4e53-a288-c291b40e5e0c" 00:17:54.756 ], 00:17:54.756 "product_name": "Malloc disk", 00:17:54.756 "block_size": 512, 00:17:54.756 "num_blocks": 65536, 00:17:54.756 "uuid": "14899ea2-56c4-4e53-a288-c291b40e5e0c", 00:17:54.756 "assigned_rate_limits": { 00:17:54.756 "rw_ios_per_sec": 0, 00:17:54.756 "rw_mbytes_per_sec": 0, 00:17:54.756 "r_mbytes_per_sec": 0, 00:17:54.756 "w_mbytes_per_sec": 0 00:17:54.756 }, 00:17:54.756 "claimed": true, 00:17:54.756 "claim_type": "exclusive_write", 00:17:54.756 "zoned": false, 00:17:54.756 
"supported_io_types": { 00:17:54.756 "read": true, 00:17:54.756 "write": true, 00:17:54.756 "unmap": true, 00:17:54.756 "write_zeroes": true, 00:17:54.756 "flush": true, 00:17:54.756 "reset": true, 00:17:54.756 "compare": false, 00:17:54.756 "compare_and_write": false, 00:17:54.756 "abort": true, 00:17:54.756 "nvme_admin": false, 00:17:54.756 "nvme_io": false 00:17:54.756 }, 00:17:54.756 "memory_domains": [ 00:17:54.756 { 00:17:54.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.756 "dma_device_type": 2 00:17:54.756 } 00:17:54.756 ], 00:17:54.756 "driver_specific": {} 00:17:54.756 } 00:17:54.756 ] 00:17:54.756 21:00:22 -- common/autotest_common.sh@895 -- # return 0 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.756 21:00:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.015 21:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.015 "name": "Existed_Raid", 00:17:55.015 "uuid": "70638a4e-e78c-46ba-ac04-2a1899d9c7e3", 00:17:55.015 "strip_size_kb": 64, 00:17:55.015 "state": "online", 00:17:55.015 "raid_level": "concat", 00:17:55.015 "superblock": false, 00:17:55.015 "num_base_bdevs": 4, 00:17:55.015 "num_base_bdevs_discovered": 4, 00:17:55.015 "num_base_bdevs_operational": 4, 00:17:55.015 "base_bdevs_list": [ 00:17:55.015 { 00:17:55.015 "name": "BaseBdev1", 00:17:55.015 "uuid": "ae80ab78-703b-45cc-9bce-dfdedf59adff", 00:17:55.015 "is_configured": true, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 65536 00:17:55.015 }, 00:17:55.015 { 00:17:55.015 "name": "BaseBdev2", 00:17:55.015 "uuid": "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8", 00:17:55.015 "is_configured": true, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 65536 00:17:55.015 }, 00:17:55.015 { 00:17:55.015 "name": "BaseBdev3", 00:17:55.015 "uuid": "8d5f5081-bd72-49e9-91ee-0f700a170f02", 00:17:55.015 "is_configured": true, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 65536 00:17:55.015 }, 00:17:55.015 { 00:17:55.015 "name": "BaseBdev4", 00:17:55.015 "uuid": "14899ea2-56c4-4e53-a288-c291b40e5e0c", 00:17:55.015 "is_configured": true, 00:17:55.015 "data_offset": 0, 00:17:55.015 "data_size": 65536 00:17:55.015 } 00:17:55.015 ] 00:17:55.015 }' 00:17:55.015 21:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.015 21:00:23 -- common/autotest_common.sh@10 -- # set +x 00:17:55.582 21:00:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:55.840 [2024-06-09 21:00:23.903299] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.840 [2024-06-09 21:00:23.903506] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.840 [2024-06-09 21:00:23.903700] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.840 21:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.099 21:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.099 "name": "Existed_Raid", 00:17:56.099 "uuid": "70638a4e-e78c-46ba-ac04-2a1899d9c7e3", 00:17:56.099 "strip_size_kb": 64, 00:17:56.099 "state": "offline", 00:17:56.099 "raid_level": "concat", 00:17:56.099 "superblock": false, 00:17:56.099 "num_base_bdevs": 4, 00:17:56.099 "num_base_bdevs_discovered": 3, 00:17:56.099 "num_base_bdevs_operational": 3, 00:17:56.099 "base_bdevs_list": [ 00:17:56.099 { 00:17:56.099 "name": null, 00:17:56.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.099 "is_configured": false, 00:17:56.099 "data_offset": 0, 00:17:56.099 "data_size": 65536 00:17:56.099 }, 00:17:56.099 { 00:17:56.099 "name": "BaseBdev2", 00:17:56.099 "uuid": "ff32dbe5-9504-4f83-a8c3-e7fdf6e3ffd8", 00:17:56.099 "is_configured": true, 00:17:56.099 "data_offset": 0, 00:17:56.099 "data_size": 65536 00:17:56.099 }, 00:17:56.099 { 00:17:56.099 "name": "BaseBdev3", 00:17:56.099 "uuid": "8d5f5081-bd72-49e9-91ee-0f700a170f02", 00:17:56.099 "is_configured": true, 00:17:56.099 "data_offset": 0, 00:17:56.099 "data_size": 65536 00:17:56.099 }, 00:17:56.099 { 00:17:56.099 "name": "BaseBdev4", 00:17:56.099 "uuid": "14899ea2-56c4-4e53-a288-c291b40e5e0c", 00:17:56.099 "is_configured": true, 00:17:56.099 "data_offset": 0, 00:17:56.099 "data_size": 65536 00:17:56.099 } 00:17:56.099 ] 00:17:56.099 }' 00:17:56.099 21:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.099 21:00:24 -- common/autotest_common.sh@10 -- # set +x 00:17:56.691 21:00:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:56.691 21:00:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.691 21:00:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:56.691 21:00:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:56.950 21:00:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:56.950 21:00:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:56.950 21:00:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:57.208 [2024-06-09 21:00:25.352158] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.467 21:00:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:57.726 [2024-06-09 21:00:25.891557] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:57.984 21:00:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:57.984 21:00:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.984 21:00:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.984 21:00:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:58.242 21:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:58.242 21:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.242 21:00:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:58.242 [2024-06-09 21:00:26.409889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:58.242 [2024-06-09 21:00:26.410140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:58.500 21:00:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:58.500 21:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:58.500 21:00:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.500 21:00:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:58.759 21:00:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:58.759 21:00:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:58.759 21:00:26 -- bdev/bdev_raid.sh@287 -- # killprocess 119013 00:17:58.759 21:00:26 -- common/autotest_common.sh@926 -- # '[' -z 119013 ']' 00:17:58.759 21:00:26 -- common/autotest_common.sh@930 -- # kill -0 119013 00:17:58.759 21:00:26 -- common/autotest_common.sh@931 -- # uname 00:17:58.759 21:00:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.759 21:00:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119013 00:17:58.759 21:00:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:58.759 21:00:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:58.759 21:00:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119013' 00:17:58.759 killing process with pid 119013 00:17:58.759 21:00:26 -- common/autotest_common.sh@945 
-- # kill 119013 00:17:58.759 [2024-06-09 21:00:26.763505] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.759 21:00:26 -- common/autotest_common.sh@950 -- # wait 119013 00:17:58.759 [2024-06-09 21:00:26.763744] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.694 ************************************ 00:17:59.694 END TEST raid_state_function_test 00:17:59.694 ************************************ 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:59.694 00:17:59.694 real 0m14.221s 00:17:59.694 user 0m25.490s 00:17:59.694 sys 0m1.598s 00:17:59.694 21:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.694 21:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:59.694 21:00:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:59.694 21:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:59.694 21:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:59.694 ************************************ 00:17:59.694 START TEST raid_state_function_test_sb 00:17:59.694 ************************************ 00:17:59.694 21:00:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:59.694 21:00:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:59.695 21:00:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=119454 
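[Editorial sketch — annotation, not part of the captured output. The raid_state_function_test_sb run that starts below drives the same RPC flow as the test above, only passing -s to bdev_raid_create so each member carries an on-disk superblock. A condensed sketch of the construction step (the real test interleaves creation and existence checks; sizes and flags are taken verbatim from the traces below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Four 32 MiB malloc bdevs with 512-byte blocks back the array.
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done

    # -z 64: 64 KiB strip size; -s: write a superblock to each member.
    $rpc bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The superblock is why the bdev_raid_get_bdevs dumps in this run report data_offset 2048 and data_size 63488 blocks per member, where the non-superblock run above reported 0 and 65536: the reserved head of each base bdev holds the metadata. End annotation.]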
00:17:59.695 21:00:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:59.695 21:00:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119454' 00:17:59.695 Process raid pid: 119454 00:17:59.695 21:00:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119454 /var/tmp/spdk-raid.sock 00:17:59.695 21:00:27 -- common/autotest_common.sh@819 -- # '[' -z 119454 ']' 00:17:59.695 21:00:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:59.695 21:00:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.695 21:00:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:59.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:59.695 21:00:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.695 21:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:59.695 [2024-06-09 21:00:27.819244] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:59.695 [2024-06-09 21:00:27.819620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.953 [2024-06-09 21:00:27.992339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.212 [2024-06-09 21:00:28.234301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.471 [2024-06-09 21:00:28.403351] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.730 21:00:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:00.730 21:00:28 -- common/autotest_common.sh@852 -- # return 0 00:18:00.730 21:00:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:00.989 [2024-06-09 21:00:28.943042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.989 [2024-06-09 21:00:28.943357] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.989 [2024-06-09 21:00:28.943497] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:00.989 [2024-06-09 21:00:28.943563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:00.989 [2024-06-09 21:00:28.943714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:00.989 [2024-06-09 21:00:28.943838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:00.989 [2024-06-09 21:00:28.944016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:00.989 [2024-06-09 21:00:28.944126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.989 21:00:28 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.989 21:00:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.248 21:00:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.248 "name": "Existed_Raid", 00:18:01.248 "uuid": "f5971a35-2556-48ef-9156-51abe2761e64", 00:18:01.248 "strip_size_kb": 64, 00:18:01.248 "state": "configuring", 00:18:01.248 "raid_level": "concat", 00:18:01.248 "superblock": true, 00:18:01.248 "num_base_bdevs": 4, 00:18:01.248 "num_base_bdevs_discovered": 0, 00:18:01.248 "num_base_bdevs_operational": 4, 00:18:01.248 "base_bdevs_list": [ 00:18:01.248 { 00:18:01.248 "name": "BaseBdev1", 00:18:01.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.248 "is_configured": false, 00:18:01.248 "data_offset": 0, 00:18:01.248 "data_size": 0 00:18:01.248 }, 00:18:01.248 { 00:18:01.248 "name": "BaseBdev2", 00:18:01.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.248 "is_configured": false, 00:18:01.248 "data_offset": 0, 00:18:01.248 "data_size": 0 00:18:01.248 }, 00:18:01.248 { 00:18:01.248 "name": "BaseBdev3", 00:18:01.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.248 "is_configured": false, 00:18:01.248 "data_offset": 0, 00:18:01.248 "data_size": 0 00:18:01.248 }, 00:18:01.248 { 00:18:01.248 "name": "BaseBdev4", 00:18:01.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.248 "is_configured": false, 00:18:01.248 "data_offset": 0, 00:18:01.248 "data_size": 0 00:18:01.248 } 00:18:01.248 ] 00:18:01.248 }' 00:18:01.248 21:00:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.248 21:00:29 -- common/autotest_common.sh@10 -- # set +x 00:18:01.815 21:00:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:02.074 [2024-06-09 21:00:30.027372] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:02.074 [2024-06-09 21:00:30.027637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:02.074 21:00:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:02.333 [2024-06-09 21:00:30.279478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:02.333 [2024-06-09 21:00:30.279720] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:02.334 [2024-06-09 21:00:30.279837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:02.334 [2024-06-09 21:00:30.280009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:02.334 [2024-06-09 21:00:30.280122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:02.334 [2024-06-09 21:00:30.280202] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:02.334 [2024-06-09 21:00:30.280307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:02.334 [2024-06-09 21:00:30.280372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:02.334 21:00:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:02.592 [2024-06-09 21:00:30.568876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:02.592 BaseBdev1 00:18:02.592 21:00:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:02.592 21:00:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:02.592 21:00:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:02.592 21:00:30 -- common/autotest_common.sh@889 -- # local i 00:18:02.592 21:00:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:02.592 21:00:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:02.592 21:00:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:02.851 21:00:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:02.851 [ 00:18:02.851 { 00:18:02.851 "name": "BaseBdev1", 00:18:02.851 "aliases": [ 00:18:02.851 "33fffb0e-22eb-4b2e-af1a-a6445fe79b2f" 00:18:02.851 ], 00:18:02.851 "product_name": "Malloc disk", 00:18:02.851 "block_size": 512, 00:18:02.851 "num_blocks": 65536, 00:18:02.851 "uuid": "33fffb0e-22eb-4b2e-af1a-a6445fe79b2f", 00:18:02.851 "assigned_rate_limits": { 00:18:02.851 "rw_ios_per_sec": 0, 00:18:02.851 "rw_mbytes_per_sec": 0, 00:18:02.851 "r_mbytes_per_sec": 0, 00:18:02.851 "w_mbytes_per_sec": 0 00:18:02.851 }, 00:18:02.851 "claimed": true, 00:18:02.851 "claim_type": "exclusive_write", 00:18:02.851 "zoned": false, 00:18:02.851 "supported_io_types": { 00:18:02.851 "read": true, 00:18:02.851 "write": true, 00:18:02.851 "unmap": true, 00:18:02.851 "write_zeroes": true, 00:18:02.851 "flush": true, 00:18:02.851 "reset": true, 00:18:02.851 "compare": false, 00:18:02.851 "compare_and_write": false, 00:18:02.851 "abort": true, 00:18:02.851 "nvme_admin": false, 00:18:02.851 "nvme_io": false 00:18:02.851 }, 00:18:02.851 "memory_domains": [ 00:18:02.851 { 00:18:02.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.851 "dma_device_type": 2 00:18:02.851 } 00:18:02.851 ], 00:18:02.851 "driver_specific": {} 00:18:02.851 } 00:18:02.851 ] 00:18:03.109 21:00:31 -- common/autotest_common.sh@895 -- # return 0 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.109 21:00:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.368 21:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.368 "name": "Existed_Raid", 00:18:03.368 "uuid": "666933d7-ea14-46b1-9c89-00025d11c077", 00:18:03.368 "strip_size_kb": 64, 00:18:03.368 "state": "configuring", 00:18:03.368 "raid_level": "concat", 00:18:03.368 "superblock": true, 00:18:03.368 "num_base_bdevs": 4, 00:18:03.368 "num_base_bdevs_discovered": 1, 00:18:03.368 "num_base_bdevs_operational": 4, 00:18:03.368 "base_bdevs_list": [ 00:18:03.368 { 00:18:03.368 "name": "BaseBdev1", 00:18:03.368 "uuid": "33fffb0e-22eb-4b2e-af1a-a6445fe79b2f", 00:18:03.368 "is_configured": true, 00:18:03.368 "data_offset": 2048, 00:18:03.368 "data_size": 63488 00:18:03.368 }, 00:18:03.368 { 00:18:03.368 "name": "BaseBdev2", 00:18:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.368 "is_configured": false, 00:18:03.368 "data_offset": 0, 00:18:03.368 "data_size": 0 00:18:03.368 }, 00:18:03.368 { 00:18:03.368 "name": "BaseBdev3", 00:18:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.368 "is_configured": false, 00:18:03.368 "data_offset": 0, 00:18:03.368 "data_size": 0 00:18:03.368 }, 00:18:03.368 { 00:18:03.368 "name": "BaseBdev4", 00:18:03.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.368 "is_configured": false, 00:18:03.368 "data_offset": 0, 00:18:03.368 "data_size": 0 00:18:03.368 } 00:18:03.368 ] 00:18:03.368 }' 00:18:03.368 21:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.368 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:18:03.936 21:00:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:04.193 [2024-06-09 21:00:32.141235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.193 [2024-06-09 21:00:32.141453] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:04.193 21:00:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:04.193 21:00:32 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:04.451 21:00:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.709 BaseBdev1 00:18:04.709 21:00:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:04.709 21:00:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:04.709 21:00:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:04.709 21:00:32 -- common/autotest_common.sh@889 -- # local i 00:18:04.709 21:00:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:04.709 21:00:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:04.709 21:00:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.967 21:00:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:05.225 [ 00:18:05.225 { 00:18:05.225 "name": "BaseBdev1", 00:18:05.225 "aliases": [ 00:18:05.225 "c5576eaf-c561-4630-9a62-3e2645f676bd" 00:18:05.225 ], 
00:18:05.225 "product_name": "Malloc disk", 00:18:05.225 "block_size": 512, 00:18:05.225 "num_blocks": 65536, 00:18:05.225 "uuid": "c5576eaf-c561-4630-9a62-3e2645f676bd", 00:18:05.225 "assigned_rate_limits": { 00:18:05.225 "rw_ios_per_sec": 0, 00:18:05.225 "rw_mbytes_per_sec": 0, 00:18:05.225 "r_mbytes_per_sec": 0, 00:18:05.225 "w_mbytes_per_sec": 0 00:18:05.225 }, 00:18:05.225 "claimed": false, 00:18:05.225 "zoned": false, 00:18:05.225 "supported_io_types": { 00:18:05.225 "read": true, 00:18:05.225 "write": true, 00:18:05.225 "unmap": true, 00:18:05.225 "write_zeroes": true, 00:18:05.225 "flush": true, 00:18:05.225 "reset": true, 00:18:05.225 "compare": false, 00:18:05.225 "compare_and_write": false, 00:18:05.226 "abort": true, 00:18:05.226 "nvme_admin": false, 00:18:05.226 "nvme_io": false 00:18:05.226 }, 00:18:05.226 "memory_domains": [ 00:18:05.226 { 00:18:05.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.226 "dma_device_type": 2 00:18:05.226 } 00:18:05.226 ], 00:18:05.226 "driver_specific": {} 00:18:05.226 } 00:18:05.226 ] 00:18:05.226 21:00:33 -- common/autotest_common.sh@895 -- # return 0 00:18:05.226 21:00:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.484 [2024-06-09 21:00:33.427579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.484 [2024-06-09 21:00:33.429692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.484 [2024-06-09 21:00:33.429909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.484 [2024-06-09 21:00:33.430042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.484 [2024-06-09 21:00:33.430110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.484 [2024-06-09 21:00:33.430210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:05.484 [2024-06-09 21:00:33.430361] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.484 21:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.742 21:00:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.742 "name": "Existed_Raid", 
00:18:05.742 "uuid": "6014fd86-33c1-4a06-98c4-afbb49d09d98", 00:18:05.742 "strip_size_kb": 64, 00:18:05.742 "state": "configuring", 00:18:05.742 "raid_level": "concat", 00:18:05.742 "superblock": true, 00:18:05.742 "num_base_bdevs": 4, 00:18:05.742 "num_base_bdevs_discovered": 1, 00:18:05.742 "num_base_bdevs_operational": 4, 00:18:05.742 "base_bdevs_list": [ 00:18:05.742 { 00:18:05.742 "name": "BaseBdev1", 00:18:05.742 "uuid": "c5576eaf-c561-4630-9a62-3e2645f676bd", 00:18:05.742 "is_configured": true, 00:18:05.742 "data_offset": 2048, 00:18:05.742 "data_size": 63488 00:18:05.742 }, 00:18:05.742 { 00:18:05.742 "name": "BaseBdev2", 00:18:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.742 "is_configured": false, 00:18:05.742 "data_offset": 0, 00:18:05.742 "data_size": 0 00:18:05.742 }, 00:18:05.742 { 00:18:05.742 "name": "BaseBdev3", 00:18:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.742 "is_configured": false, 00:18:05.742 "data_offset": 0, 00:18:05.742 "data_size": 0 00:18:05.742 }, 00:18:05.742 { 00:18:05.742 "name": "BaseBdev4", 00:18:05.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.742 "is_configured": false, 00:18:05.742 "data_offset": 0, 00:18:05.742 "data_size": 0 00:18:05.742 } 00:18:05.742 ] 00:18:05.742 }' 00:18:05.742 21:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.742 21:00:33 -- common/autotest_common.sh@10 -- # set +x 00:18:06.327 21:00:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.585 [2024-06-09 21:00:34.662154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.585 BaseBdev2 00:18:06.585 21:00:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:06.585 21:00:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:06.585 21:00:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.585 21:00:34 -- common/autotest_common.sh@889 -- # local i 00:18:06.585 21:00:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.585 21:00:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.585 21:00:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.844 21:00:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:07.103 [ 00:18:07.103 { 00:18:07.103 "name": "BaseBdev2", 00:18:07.103 "aliases": [ 00:18:07.103 "51914090-cb09-4f64-aa2f-ec416fe7a3c7" 00:18:07.103 ], 00:18:07.103 "product_name": "Malloc disk", 00:18:07.103 "block_size": 512, 00:18:07.103 "num_blocks": 65536, 00:18:07.103 "uuid": "51914090-cb09-4f64-aa2f-ec416fe7a3c7", 00:18:07.103 "assigned_rate_limits": { 00:18:07.103 "rw_ios_per_sec": 0, 00:18:07.103 "rw_mbytes_per_sec": 0, 00:18:07.103 "r_mbytes_per_sec": 0, 00:18:07.103 "w_mbytes_per_sec": 0 00:18:07.103 }, 00:18:07.103 "claimed": true, 00:18:07.103 "claim_type": "exclusive_write", 00:18:07.103 "zoned": false, 00:18:07.103 "supported_io_types": { 00:18:07.103 "read": true, 00:18:07.103 "write": true, 00:18:07.103 "unmap": true, 00:18:07.103 "write_zeroes": true, 00:18:07.103 "flush": true, 00:18:07.103 "reset": true, 00:18:07.103 "compare": false, 00:18:07.103 "compare_and_write": false, 00:18:07.103 "abort": true, 00:18:07.103 "nvme_admin": false, 00:18:07.103 "nvme_io": false 00:18:07.103 }, 00:18:07.103 
"memory_domains": [ 00:18:07.103 { 00:18:07.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.103 "dma_device_type": 2 00:18:07.103 } 00:18:07.103 ], 00:18:07.103 "driver_specific": {} 00:18:07.103 } 00:18:07.103 ] 00:18:07.103 21:00:35 -- common/autotest_common.sh@895 -- # return 0 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.103 21:00:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.362 21:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.362 "name": "Existed_Raid", 00:18:07.362 "uuid": "6014fd86-33c1-4a06-98c4-afbb49d09d98", 00:18:07.362 "strip_size_kb": 64, 00:18:07.362 "state": "configuring", 00:18:07.362 "raid_level": "concat", 00:18:07.362 "superblock": true, 00:18:07.362 "num_base_bdevs": 4, 00:18:07.362 "num_base_bdevs_discovered": 2, 00:18:07.362 "num_base_bdevs_operational": 4, 00:18:07.362 "base_bdevs_list": [ 00:18:07.362 { 00:18:07.362 "name": "BaseBdev1", 00:18:07.362 "uuid": "c5576eaf-c561-4630-9a62-3e2645f676bd", 00:18:07.362 "is_configured": true, 00:18:07.362 "data_offset": 2048, 00:18:07.362 "data_size": 63488 00:18:07.362 }, 00:18:07.362 { 00:18:07.362 "name": "BaseBdev2", 00:18:07.362 "uuid": "51914090-cb09-4f64-aa2f-ec416fe7a3c7", 00:18:07.362 "is_configured": true, 00:18:07.362 "data_offset": 2048, 00:18:07.362 "data_size": 63488 00:18:07.362 }, 00:18:07.362 { 00:18:07.362 "name": "BaseBdev3", 00:18:07.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.362 "is_configured": false, 00:18:07.362 "data_offset": 0, 00:18:07.362 "data_size": 0 00:18:07.362 }, 00:18:07.362 { 00:18:07.362 "name": "BaseBdev4", 00:18:07.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.362 "is_configured": false, 00:18:07.362 "data_offset": 0, 00:18:07.362 "data_size": 0 00:18:07.362 } 00:18:07.362 ] 00:18:07.362 }' 00:18:07.362 21:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.362 21:00:35 -- common/autotest_common.sh@10 -- # set +x 00:18:07.929 21:00:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.189 [2024-06-09 21:00:36.214615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.189 BaseBdev3 00:18:08.189 21:00:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:08.189 21:00:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:08.189 21:00:36 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:18:08.189 21:00:36 -- common/autotest_common.sh@889 -- # local i 00:18:08.189 21:00:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:08.189 21:00:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:08.189 21:00:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.448 21:00:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.707 [ 00:18:08.707 { 00:18:08.707 "name": "BaseBdev3", 00:18:08.707 "aliases": [ 00:18:08.707 "2637729d-470d-45eb-9d71-8512defeb500" 00:18:08.707 ], 00:18:08.707 "product_name": "Malloc disk", 00:18:08.707 "block_size": 512, 00:18:08.707 "num_blocks": 65536, 00:18:08.707 "uuid": "2637729d-470d-45eb-9d71-8512defeb500", 00:18:08.707 "assigned_rate_limits": { 00:18:08.707 "rw_ios_per_sec": 0, 00:18:08.707 "rw_mbytes_per_sec": 0, 00:18:08.707 "r_mbytes_per_sec": 0, 00:18:08.707 "w_mbytes_per_sec": 0 00:18:08.707 }, 00:18:08.707 "claimed": true, 00:18:08.707 "claim_type": "exclusive_write", 00:18:08.707 "zoned": false, 00:18:08.707 "supported_io_types": { 00:18:08.707 "read": true, 00:18:08.707 "write": true, 00:18:08.707 "unmap": true, 00:18:08.707 "write_zeroes": true, 00:18:08.707 "flush": true, 00:18:08.707 "reset": true, 00:18:08.707 "compare": false, 00:18:08.707 "compare_and_write": false, 00:18:08.707 "abort": true, 00:18:08.707 "nvme_admin": false, 00:18:08.707 "nvme_io": false 00:18:08.707 }, 00:18:08.707 "memory_domains": [ 00:18:08.707 { 00:18:08.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.707 "dma_device_type": 2 00:18:08.707 } 00:18:08.707 ], 00:18:08.707 "driver_specific": {} 00:18:08.707 } 00:18:08.707 ] 00:18:08.707 21:00:36 -- common/autotest_common.sh@895 -- # return 0 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.707 21:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.966 21:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.966 "name": "Existed_Raid", 00:18:08.966 "uuid": "6014fd86-33c1-4a06-98c4-afbb49d09d98", 00:18:08.966 "strip_size_kb": 64, 00:18:08.966 "state": "configuring", 00:18:08.966 "raid_level": "concat", 00:18:08.966 "superblock": true, 00:18:08.966 "num_base_bdevs": 4, 00:18:08.966 "num_base_bdevs_discovered": 3, 00:18:08.966 "num_base_bdevs_operational": 4, 00:18:08.966 "base_bdevs_list": [ 00:18:08.966 { 
00:18:08.966 "name": "BaseBdev1", 00:18:08.966 "uuid": "c5576eaf-c561-4630-9a62-3e2645f676bd", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev2", 00:18:08.966 "uuid": "51914090-cb09-4f64-aa2f-ec416fe7a3c7", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev3", 00:18:08.966 "uuid": "2637729d-470d-45eb-9d71-8512defeb500", 00:18:08.966 "is_configured": true, 00:18:08.966 "data_offset": 2048, 00:18:08.966 "data_size": 63488 00:18:08.966 }, 00:18:08.966 { 00:18:08.966 "name": "BaseBdev4", 00:18:08.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.966 "is_configured": false, 00:18:08.966 "data_offset": 0, 00:18:08.966 "data_size": 0 00:18:08.966 } 00:18:08.966 ] 00:18:08.966 }' 00:18:08.966 21:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.966 21:00:36 -- common/autotest_common.sh@10 -- # set +x 00:18:09.533 21:00:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:09.792 [2024-06-09 21:00:37.886804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:09.792 [2024-06-09 21:00:37.887535] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:09.792 [2024-06-09 21:00:37.887660] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:09.792 [2024-06-09 21:00:37.887821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:09.792 [2024-06-09 21:00:37.888191] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:09.792 BaseBdev4 00:18:09.792 [2024-06-09 21:00:37.888343] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:09.792 [2024-06-09 21:00:37.888586] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.792 21:00:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:09.792 21:00:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:09.792 21:00:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:09.793 21:00:37 -- common/autotest_common.sh@889 -- # local i 00:18:09.793 21:00:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:09.793 21:00:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:09.793 21:00:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.051 21:00:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:10.310 [ 00:18:10.310 { 00:18:10.310 "name": "BaseBdev4", 00:18:10.310 "aliases": [ 00:18:10.310 "9780f084-1562-4ce7-8e03-ec8885e42dee" 00:18:10.310 ], 00:18:10.310 "product_name": "Malloc disk", 00:18:10.310 "block_size": 512, 00:18:10.310 "num_blocks": 65536, 00:18:10.311 "uuid": "9780f084-1562-4ce7-8e03-ec8885e42dee", 00:18:10.311 "assigned_rate_limits": { 00:18:10.311 "rw_ios_per_sec": 0, 00:18:10.311 "rw_mbytes_per_sec": 0, 00:18:10.311 "r_mbytes_per_sec": 0, 00:18:10.311 "w_mbytes_per_sec": 0 00:18:10.311 }, 00:18:10.311 "claimed": true, 00:18:10.311 "claim_type": "exclusive_write", 00:18:10.311 "zoned": false, 
00:18:10.311 "supported_io_types": { 00:18:10.311 "read": true, 00:18:10.311 "write": true, 00:18:10.311 "unmap": true, 00:18:10.311 "write_zeroes": true, 00:18:10.311 "flush": true, 00:18:10.311 "reset": true, 00:18:10.311 "compare": false, 00:18:10.311 "compare_and_write": false, 00:18:10.311 "abort": true, 00:18:10.311 "nvme_admin": false, 00:18:10.311 "nvme_io": false 00:18:10.311 }, 00:18:10.311 "memory_domains": [ 00:18:10.311 { 00:18:10.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.311 "dma_device_type": 2 00:18:10.311 } 00:18:10.311 ], 00:18:10.311 "driver_specific": {} 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 21:00:38 -- common/autotest_common.sh@895 -- # return 0 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.311 21:00:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.570 21:00:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.570 "name": "Existed_Raid", 00:18:10.570 "uuid": "6014fd86-33c1-4a06-98c4-afbb49d09d98", 00:18:10.570 "strip_size_kb": 64, 00:18:10.570 "state": "online", 00:18:10.570 "raid_level": "concat", 00:18:10.570 "superblock": true, 00:18:10.570 "num_base_bdevs": 4, 00:18:10.570 "num_base_bdevs_discovered": 4, 00:18:10.570 "num_base_bdevs_operational": 4, 00:18:10.570 "base_bdevs_list": [ 00:18:10.570 { 00:18:10.570 "name": "BaseBdev1", 00:18:10.570 "uuid": "c5576eaf-c561-4630-9a62-3e2645f676bd", 00:18:10.570 "is_configured": true, 00:18:10.570 "data_offset": 2048, 00:18:10.570 "data_size": 63488 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev2", 00:18:10.570 "uuid": "51914090-cb09-4f64-aa2f-ec416fe7a3c7", 00:18:10.570 "is_configured": true, 00:18:10.570 "data_offset": 2048, 00:18:10.570 "data_size": 63488 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev3", 00:18:10.570 "uuid": "2637729d-470d-45eb-9d71-8512defeb500", 00:18:10.570 "is_configured": true, 00:18:10.570 "data_offset": 2048, 00:18:10.570 "data_size": 63488 00:18:10.570 }, 00:18:10.570 { 00:18:10.570 "name": "BaseBdev4", 00:18:10.570 "uuid": "9780f084-1562-4ce7-8e03-ec8885e42dee", 00:18:10.570 "is_configured": true, 00:18:10.570 "data_offset": 2048, 00:18:10.570 "data_size": 63488 00:18:10.570 } 00:18:10.570 ] 00:18:10.570 }' 00:18:10.570 21:00:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.570 21:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:11.138 21:00:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:18:11.138 [2024-06-09 21:00:39.299204] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.138 [2024-06-09 21:00:39.299456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.138 [2024-06-09 21:00:39.299630] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.396 21:00:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.655 21:00:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.655 "name": "Existed_Raid", 00:18:11.655 "uuid": "6014fd86-33c1-4a06-98c4-afbb49d09d98", 00:18:11.655 "strip_size_kb": 64, 00:18:11.655 "state": "offline", 00:18:11.655 "raid_level": "concat", 00:18:11.655 "superblock": true, 00:18:11.655 "num_base_bdevs": 4, 00:18:11.655 "num_base_bdevs_discovered": 3, 00:18:11.655 "num_base_bdevs_operational": 3, 00:18:11.655 "base_bdevs_list": [ 00:18:11.655 { 00:18:11.655 "name": null, 00:18:11.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.655 "is_configured": false, 00:18:11.655 "data_offset": 2048, 00:18:11.655 "data_size": 63488 00:18:11.655 }, 00:18:11.655 { 00:18:11.655 "name": "BaseBdev2", 00:18:11.655 "uuid": "51914090-cb09-4f64-aa2f-ec416fe7a3c7", 00:18:11.655 "is_configured": true, 00:18:11.655 "data_offset": 2048, 00:18:11.655 "data_size": 63488 00:18:11.655 }, 00:18:11.655 { 00:18:11.655 "name": "BaseBdev3", 00:18:11.655 "uuid": "2637729d-470d-45eb-9d71-8512defeb500", 00:18:11.655 "is_configured": true, 00:18:11.655 "data_offset": 2048, 00:18:11.655 "data_size": 63488 00:18:11.655 }, 00:18:11.655 { 00:18:11.655 "name": "BaseBdev4", 00:18:11.655 "uuid": "9780f084-1562-4ce7-8e03-ec8885e42dee", 00:18:11.655 "is_configured": true, 00:18:11.655 "data_offset": 2048, 00:18:11.655 "data_size": 63488 00:18:11.655 } 00:18:11.655 ] 00:18:11.655 }' 00:18:11.655 21:00:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.655 21:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:12.222 21:00:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:12.222 21:00:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:12.222 21:00:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.222 21:00:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:12.576 21:00:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:12.576 21:00:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:12.576 21:00:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:12.576 [2024-06-09 21:00:40.701226] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:12.837 21:00:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:12.837 21:00:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:12.837 21:00:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.837 21:00:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:13.096 21:00:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:13.096 21:00:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.096 21:00:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:13.096 [2024-06-09 21:00:41.267624] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:13.354 21:00:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:13.354 21:00:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:13.354 21:00:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.354 21:00:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:13.612 21:00:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:13.612 21:00:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:13.612 21:00:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:13.612 [2024-06-09 21:00:41.764684] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:13.612 [2024-06-09 21:00:41.765119] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:13.870 21:00:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:13.870 21:00:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:13.870 21:00:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.870 21:00:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:13.870 21:00:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:13.870 21:00:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:13.871 21:00:42 -- bdev/bdev_raid.sh@287 -- # killprocess 119454 00:18:13.871 21:00:42 -- common/autotest_common.sh@926 -- # '[' -z 119454 ']' 00:18:13.871 21:00:42 -- common/autotest_common.sh@930 -- # kill -0 119454 00:18:13.871 21:00:42 -- common/autotest_common.sh@931 -- # uname 00:18:13.871 21:00:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.871 21:00:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119454 00:18:14.128 killing process with pid 119454 00:18:14.128 21:00:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:14.128 21:00:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:14.128 21:00:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119454' 
00:18:14.128 21:00:42 -- common/autotest_common.sh@945 -- # kill 119454 00:18:14.128 21:00:42 -- common/autotest_common.sh@950 -- # wait 119454 00:18:14.128 [2024-06-09 21:00:42.060263] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.128 [2024-06-09 21:00:42.060408] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.064 ************************************ 00:18:15.064 END TEST raid_state_function_test_sb 00:18:15.064 ************************************ 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:15.064 00:18:15.064 real 0m15.356s 00:18:15.064 user 0m27.206s 00:18:15.064 sys 0m1.888s 00:18:15.064 21:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.064 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:15.064 21:00:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:15.064 21:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:15.064 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:18:15.064 ************************************ 00:18:15.064 START TEST raid_superblock_test 00:18:15.064 ************************************ 00:18:15.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:15.064 21:00:43 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=119919 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119919 /var/tmp/spdk-raid.sock 00:18:15.064 21:00:43 -- common/autotest_common.sh@819 -- # '[' -z 119919 ']' 00:18:15.064 21:00:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:15.064 21:00:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:15.064 21:00:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:15.064 21:00:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
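[Editorial sketch — annotation, not part of the captured output. raid_superblock_test, starting above, builds its members as passthru bdevs with fixed UUIDs layered over mallocs, as the traces below show; one member's setup reduces to:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Back a passthru bdev (pt1) with a 32 MiB, 512-byte-block malloc,
    # pinning its UUID to a known constant.
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001

Both commands appear verbatim in the traces that follow; pinning the UUIDs is presumably what lets the test later check that the on-disk superblock round-trips stable member identifiers. End annotation.]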
00:18:15.064 21:00:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:15.064 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:18:15.064 [2024-06-09 21:00:43.221767] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:15.064 [2024-06-09 21:00:43.221992] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119919 ] 00:18:15.322 [2024-06-09 21:00:43.386289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.581 [2024-06-09 21:00:43.575393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.840 [2024-06-09 21:00:43.764815] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.099 21:00:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:16.099 21:00:44 -- common/autotest_common.sh@852 -- # return 0 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:16.099 21:00:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:16.357 malloc1 00:18:16.357 21:00:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:16.616 [2024-06-09 21:00:44.553391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:16.616 [2024-06-09 21:00:44.553501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.616 [2024-06-09 21:00:44.553548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:16.616 [2024-06-09 21:00:44.553600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.616 [2024-06-09 21:00:44.555994] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.616 [2024-06-09 21:00:44.556041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:16.616 pt1 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:16.616 21:00:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:16.616 malloc2 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.873 [2024-06-09 21:00:44.976784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.873 [2024-06-09 21:00:44.976881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.873 [2024-06-09 21:00:44.976925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:16.873 [2024-06-09 21:00:44.976986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.873 [2024-06-09 21:00:44.979248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.873 [2024-06-09 21:00:44.979312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.873 pt2 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:16.873 21:00:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:16.874 21:00:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:17.130 malloc3 00:18:17.130 21:00:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:17.388 [2024-06-09 21:00:45.477882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:17.388 [2024-06-09 21:00:45.477953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.388 [2024-06-09 21:00:45.477996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:17.388 [2024-06-09 21:00:45.478040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.388 [2024-06-09 21:00:45.480324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.388 [2024-06-09 21:00:45.480377] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:17.388 pt3 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:17.388 21:00:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:17.645 malloc4 00:18:17.646 21:00:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:17.904 [2024-06-09 21:00:45.956180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:17.904 [2024-06-09 21:00:45.956253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.904 [2024-06-09 21:00:45.956286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:17.904 [2024-06-09 21:00:45.956328] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.904 [2024-06-09 21:00:45.958525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.904 [2024-06-09 21:00:45.958576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:17.904 pt4 00:18:17.904 21:00:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:17.904 21:00:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:17.904 21:00:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:18.162 [2024-06-09 21:00:46.144281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.162 [2024-06-09 21:00:46.146241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.162 [2024-06-09 21:00:46.146332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:18.162 [2024-06-09 21:00:46.146417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:18.162 [2024-06-09 21:00:46.146647] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:18.162 [2024-06-09 21:00:46.146661] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:18.162 [2024-06-09 21:00:46.146762] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:18.162 [2024-06-09 21:00:46.147152] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:18.162 [2024-06-09 21:00:46.147166] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:18.162 [2024-06-09 21:00:46.147308] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
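Unrolled, the construction loop just traced issues the following RPC sequence: four malloc bdevs, a passthru bdev layered on each (later steps delete the pt layer while the malloc data, including the raid superblock, survives), and finally the concat array with a 64 KiB strip size; -s requests an on-disk superblock. A sketch with the names and paths from this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i      # 32 MiB backing store, 512 B blocks
    $RPC bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s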
00:18:18.162 21:00:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.421 21:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.421 "name": "raid_bdev1", 00:18:18.421 "uuid": "3a0d85e6-3cdc-4e42-bf01-72b08a9826dc", 00:18:18.421 "strip_size_kb": 64, 00:18:18.421 "state": "online", 00:18:18.421 "raid_level": "concat", 00:18:18.421 "superblock": true, 00:18:18.421 "num_base_bdevs": 4, 00:18:18.421 "num_base_bdevs_discovered": 4, 00:18:18.421 "num_base_bdevs_operational": 4, 00:18:18.421 "base_bdevs_list": [ 00:18:18.421 { 00:18:18.421 "name": "pt1", 00:18:18.421 "uuid": "05b051a7-256d-5621-9e6f-94e47b063859", 00:18:18.421 "is_configured": true, 00:18:18.421 "data_offset": 2048, 00:18:18.421 "data_size": 63488 00:18:18.421 }, 00:18:18.421 { 00:18:18.421 "name": "pt2", 00:18:18.421 "uuid": "7ed0a773-9681-52f7-8040-e6281213c569", 00:18:18.421 "is_configured": true, 00:18:18.421 "data_offset": 2048, 00:18:18.421 "data_size": 63488 00:18:18.421 }, 00:18:18.421 { 00:18:18.421 "name": "pt3", 00:18:18.421 "uuid": "31094e4c-7d5e-551a-a530-438d9d3b7622", 00:18:18.421 "is_configured": true, 00:18:18.421 "data_offset": 2048, 00:18:18.421 "data_size": 63488 00:18:18.421 }, 00:18:18.421 { 00:18:18.421 "name": "pt4", 00:18:18.421 "uuid": "eccb401e-87ea-54e9-8bc6-d680b16f6b76", 00:18:18.421 "is_configured": true, 00:18:18.421 "data_offset": 2048, 00:18:18.421 "data_size": 63488 00:18:18.421 } 00:18:18.421 ] 00:18:18.421 }' 00:18:18.421 21:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.421 21:00:46 -- common/autotest_common.sh@10 -- # set +x 00:18:18.987 21:00:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:18.987 21:00:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:18.987 [2024-06-09 21:00:47.100560] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.987 21:00:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3a0d85e6-3cdc-4e42-bf01-72b08a9826dc 00:18:18.987 21:00:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 3a0d85e6-3cdc-4e42-bf01-72b08a9826dc ']' 00:18:18.987 21:00:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:19.245 [2024-06-09 21:00:47.296398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.245 [2024-06-09 21:00:47.296420] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.245 [2024-06-09 21:00:47.296485] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.245 [2024-06-09 21:00:47.296541] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.245 [2024-06-09 21:00:47.296551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:19.245 21:00:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.245 21:00:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:19.502 21:00:47 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:19.502 21:00:47 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:19.502 21:00:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.502 21:00:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
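The verification and teardown traced above reduce to: read back the array's UUID, require it to be non-empty, delete the raid bdev, drop each passthru, and confirm the raid listing is empty again. Roughly:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_bdev_uuid=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ -n "$raid_bdev_uuid" ]          # 3a0d85e6-... in this run
  $RPC bdev_raid_delete raid_bdev1
  for i in 1 2 3 4; do
    $RPC bdev_passthru_delete pt$i
  done
  # the 'all' category should now report no raid bdevs
  [ -z "$($RPC bdev_raid_get_bdevs all | jq -r '.[]')" ]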
00:18:19.759 21:00:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:19.759 21:00:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:20.017 21:00:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.017 21:00:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:20.017 21:00:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:20.017 21:00:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:20.276 21:00:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:20.276 21:00:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:20.534 21:00:48 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:20.534 21:00:48 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:20.534 21:00:48 -- common/autotest_common.sh@640 -- # local es=0 00:18:20.534 21:00:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:20.534 21:00:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.534 21:00:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:20.534 21:00:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.534 21:00:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:20.534 21:00:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.534 21:00:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:20.534 21:00:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.534 21:00:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:20.534 21:00:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:20.793 [2024-06-09 21:00:48.768644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:20.793 [2024-06-09 21:00:48.770535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:20.793 [2024-06-09 21:00:48.770585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:20.793 [2024-06-09 21:00:48.770629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:20.793 [2024-06-09 21:00:48.770673] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:20.793 [2024-06-09 21:00:48.770739] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:20.793 [2024-06-09 21:00:48.770774] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:20.793 
[2024-06-09 21:00:48.770834] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:20.793 [2024-06-09 21:00:48.770861] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.793 [2024-06-09 21:00:48.770870] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:18:20.793 request: 00:18:20.793 { 00:18:20.793 "name": "raid_bdev1", 00:18:20.793 "raid_level": "concat", 00:18:20.793 "base_bdevs": [ 00:18:20.793 "malloc1", 00:18:20.793 "malloc2", 00:18:20.793 "malloc3", 00:18:20.793 "malloc4" 00:18:20.793 ], 00:18:20.793 "superblock": false, 00:18:20.793 "strip_size_kb": 64, 00:18:20.793 "method": "bdev_raid_create", 00:18:20.793 "req_id": 1 00:18:20.793 } 00:18:20.793 Got JSON-RPC error response 00:18:20.793 response: 00:18:20.793 { 00:18:20.793 "code": -17, 00:18:20.793 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:20.793 } 00:18:20.793 21:00:48 -- common/autotest_common.sh@643 -- # es=1 00:18:20.793 21:00:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:20.793 21:00:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:20.793 21:00:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:20.793 21:00:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.793 21:00:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:21.052 21:00:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:21.052 21:00:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:21.052 21:00:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.052 [2024-06-09 21:00:49.160674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.052 [2024-06-09 21:00:49.160739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.052 [2024-06-09 21:00:49.160766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:21.052 [2024-06-09 21:00:49.160792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.052 [2024-06-09 21:00:49.163017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.052 [2024-06-09 21:00:49.163088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.052 [2024-06-09 21:00:49.163181] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:21.052 [2024-06-09 21:00:49.163242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.052 pt1 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.052 21:00:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.311 21:00:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.311 "name": "raid_bdev1", 00:18:21.311 "uuid": "3a0d85e6-3cdc-4e42-bf01-72b08a9826dc", 00:18:21.311 "strip_size_kb": 64, 00:18:21.311 "state": "configuring", 00:18:21.311 "raid_level": "concat", 00:18:21.311 "superblock": true, 00:18:21.311 "num_base_bdevs": 4, 00:18:21.311 "num_base_bdevs_discovered": 1, 00:18:21.311 "num_base_bdevs_operational": 4, 00:18:21.311 "base_bdevs_list": [ 00:18:21.311 { 00:18:21.311 "name": "pt1", 00:18:21.311 "uuid": "05b051a7-256d-5621-9e6f-94e47b063859", 00:18:21.311 "is_configured": true, 00:18:21.311 "data_offset": 2048, 00:18:21.311 "data_size": 63488 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "name": null, 00:18:21.311 "uuid": "7ed0a773-9681-52f7-8040-e6281213c569", 00:18:21.311 "is_configured": false, 00:18:21.311 "data_offset": 2048, 00:18:21.311 "data_size": 63488 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "name": null, 00:18:21.311 "uuid": "31094e4c-7d5e-551a-a530-438d9d3b7622", 00:18:21.311 "is_configured": false, 00:18:21.311 "data_offset": 2048, 00:18:21.311 "data_size": 63488 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "name": null, 00:18:21.311 "uuid": "eccb401e-87ea-54e9-8bc6-d680b16f6b76", 00:18:21.311 "is_configured": false, 00:18:21.311 "data_offset": 2048, 00:18:21.311 "data_size": 63488 00:18:21.311 } 00:18:21.311 ] 00:18:21.311 }' 00:18:21.311 21:00:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.311 21:00:49 -- common/autotest_common.sh@10 -- # set +x 00:18:21.879 21:00:49 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:21.879 21:00:49 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.138 [2024-06-09 21:00:50.152870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.138 [2024-06-09 21:00:50.152927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.138 [2024-06-09 21:00:50.152962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:22.138 [2024-06-09 21:00:50.152982] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.138 [2024-06-09 21:00:50.153398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.138 [2024-06-09 21:00:50.153453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.138 [2024-06-09 21:00:50.153559] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:22.138 [2024-06-09 21:00:50.153583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.138 pt2 00:18:22.138 21:00:50 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:22.397 [2024-06-09 21:00:50.356919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
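The request/response pair above is the test's negative path: each mallocN still carries the superblock written for raid_bdev1, so the create call finds an existing raid superblock on every requested base bdev and refuses to overwrite it, failing with -17 (File exists). The NOT wrapper inverts the exit status; in plain bash the same assertion is approximately:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  if $RPC bdev_raid_create -z 64 -r concat \
      -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
  fi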
00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.397 "name": "raid_bdev1", 00:18:22.397 "uuid": "3a0d85e6-3cdc-4e42-bf01-72b08a9826dc", 00:18:22.397 "strip_size_kb": 64, 00:18:22.397 "state": "configuring", 00:18:22.397 "raid_level": "concat", 00:18:22.397 "superblock": true, 00:18:22.397 "num_base_bdevs": 4, 00:18:22.397 "num_base_bdevs_discovered": 1, 00:18:22.397 "num_base_bdevs_operational": 4, 00:18:22.397 "base_bdevs_list": [ 00:18:22.397 { 00:18:22.397 "name": "pt1", 00:18:22.397 "uuid": "05b051a7-256d-5621-9e6f-94e47b063859", 00:18:22.397 "is_configured": true, 00:18:22.397 "data_offset": 2048, 00:18:22.397 "data_size": 63488 00:18:22.397 }, 00:18:22.397 { 00:18:22.397 "name": null, 00:18:22.397 "uuid": "7ed0a773-9681-52f7-8040-e6281213c569", 00:18:22.397 "is_configured": false, 00:18:22.397 "data_offset": 2048, 00:18:22.397 "data_size": 63488 00:18:22.397 }, 00:18:22.397 { 00:18:22.397 "name": null, 00:18:22.397 "uuid": "31094e4c-7d5e-551a-a530-438d9d3b7622", 00:18:22.397 "is_configured": false, 00:18:22.397 "data_offset": 2048, 00:18:22.397 "data_size": 63488 00:18:22.397 }, 00:18:22.397 { 00:18:22.397 "name": null, 00:18:22.397 "uuid": "eccb401e-87ea-54e9-8bc6-d680b16f6b76", 00:18:22.397 "is_configured": false, 00:18:22.397 "data_offset": 2048, 00:18:22.397 "data_size": 63488 00:18:22.397 } 00:18:22.397 ] 00:18:22.397 }' 00:18:22.397 21:00:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.397 21:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:22.965 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:22.965 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:22.965 21:00:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.223 [2024-06-09 21:00:51.329123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.223 [2024-06-09 21:00:51.329369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.223 [2024-06-09 21:00:51.329444] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:23.223 [2024-06-09 21:00:51.329640] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.223 [2024-06-09 21:00:51.330161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.223 [2024-06-09 21:00:51.330334] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.223 [2024-06-09 21:00:51.330527] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:23.223 [2024-06-09 21:00:51.330589] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.223 pt2 00:18:23.223 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:23.223 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:23.223 21:00:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:23.482 [2024-06-09 21:00:51.585160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:23.482 [2024-06-09 21:00:51.585363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.482 [2024-06-09 21:00:51.585427] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:23.482 [2024-06-09 21:00:51.585615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.482 [2024-06-09 21:00:51.586172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.482 [2024-06-09 21:00:51.586360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:23.482 [2024-06-09 21:00:51.586555] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:23.482 [2024-06-09 21:00:51.586680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:23.482 pt3 00:18:23.482 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:23.482 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:23.482 21:00:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:23.741 [2024-06-09 21:00:51.825213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:23.741 [2024-06-09 21:00:51.825422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.741 [2024-06-09 21:00:51.825496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:23.741 [2024-06-09 21:00:51.825652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.741 [2024-06-09 21:00:51.826084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.741 [2024-06-09 21:00:51.826273] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:23.741 [2024-06-09 21:00:51.826495] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:23.741 [2024-06-09 21:00:51.826622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:23.741 [2024-06-09 21:00:51.826798] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:18:23.741 [2024-06-09 21:00:51.826951] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:23.741 [2024-06-09 21:00:51.827093] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.741 [2024-06-09 21:00:51.827547] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:18:23.741 [2024-06-09 21:00:51.827676] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:18:23.741 [2024-06-09 21:00:51.827896] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:23.741 pt4 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.741 21:00:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.000 21:00:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.000 "name": "raid_bdev1", 00:18:24.000 "uuid": "3a0d85e6-3cdc-4e42-bf01-72b08a9826dc", 00:18:24.000 "strip_size_kb": 64, 00:18:24.000 "state": "online", 00:18:24.000 "raid_level": "concat", 00:18:24.000 "superblock": true, 00:18:24.000 "num_base_bdevs": 4, 00:18:24.000 "num_base_bdevs_discovered": 4, 00:18:24.000 "num_base_bdevs_operational": 4, 00:18:24.000 "base_bdevs_list": [ 00:18:24.000 { 00:18:24.000 "name": "pt1", 00:18:24.000 "uuid": "05b051a7-256d-5621-9e6f-94e47b063859", 00:18:24.000 "is_configured": true, 00:18:24.000 "data_offset": 2048, 00:18:24.000 "data_size": 63488 00:18:24.000 }, 00:18:24.000 { 00:18:24.000 "name": "pt2", 00:18:24.000 "uuid": "7ed0a773-9681-52f7-8040-e6281213c569", 00:18:24.000 "is_configured": true, 00:18:24.000 "data_offset": 2048, 00:18:24.000 "data_size": 63488 00:18:24.000 }, 00:18:24.000 { 00:18:24.000 "name": "pt3", 00:18:24.000 "uuid": "31094e4c-7d5e-551a-a530-438d9d3b7622", 00:18:24.000 "is_configured": true, 00:18:24.000 "data_offset": 2048, 00:18:24.000 "data_size": 63488 00:18:24.000 }, 00:18:24.000 { 00:18:24.000 "name": "pt4", 00:18:24.000 "uuid": "eccb401e-87ea-54e9-8bc6-d680b16f6b76", 00:18:24.000 "is_configured": true, 00:18:24.000 "data_offset": 2048, 00:18:24.000 "data_size": 63488 00:18:24.000 } 00:18:24.000 ] 00:18:24.000 }' 00:18:24.000 21:00:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.000 21:00:52 -- common/autotest_common.sh@10 -- # set +x 00:18:24.566 21:00:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:24.566 21:00:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:24.825 [2024-06-09 21:00:52.885651] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.825 21:00:52 -- bdev/bdev_raid.sh@430 -- # '[' 3a0d85e6-3cdc-4e42-bf01-72b08a9826dc '!=' 3a0d85e6-3cdc-4e42-bf01-72b08a9826dc ']' 00:18:24.825 21:00:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:24.825 21:00:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:24.825 21:00:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:24.825 21:00:52 -- bdev/bdev_raid.sh@511 -- # killprocess 119919 00:18:24.825 21:00:52 -- common/autotest_common.sh@926 -- # '[' 
-z 119919 ']' 00:18:24.825 21:00:52 -- common/autotest_common.sh@930 -- # kill -0 119919 00:18:24.825 21:00:52 -- common/autotest_common.sh@931 -- # uname 00:18:24.825 21:00:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.825 21:00:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119919 00:18:24.825 21:00:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:24.825 21:00:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:24.825 21:00:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119919' 00:18:24.825 killing process with pid 119919 00:18:24.825 21:00:52 -- common/autotest_common.sh@945 -- # kill 119919 00:18:24.825 [2024-06-09 21:00:52.921563] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.825 21:00:52 -- common/autotest_common.sh@950 -- # wait 119919 00:18:24.825 [2024-06-09 21:00:52.921678] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.825 [2024-06-09 21:00:52.921928] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.825 [2024-06-09 21:00:52.922050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:18:25.083 [2024-06-09 21:00:53.195995] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.461 ************************************ 00:18:26.461 END TEST raid_superblock_test 00:18:26.461 ************************************ 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:26.461 00:18:26.461 real 0m11.071s 00:18:26.461 user 0m19.019s 00:18:26.461 sys 0m1.413s 00:18:26.461 21:00:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.461 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:26.461 21:00:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:26.461 21:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.461 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.461 ************************************ 00:18:26.461 START TEST raid_state_function_test 00:18:26.461 ************************************ 00:18:26.461 21:00:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.461 21:00:54 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=120228 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:26.461 Process raid pid: 120228 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120228' 00:18:26.461 21:00:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120228 /var/tmp/spdk-raid.sock 00:18:26.461 21:00:54 -- common/autotest_common.sh@819 -- # '[' -z 120228 ']' 00:18:26.461 21:00:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.461 21:00:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:26.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.461 21:00:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.461 21:00:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:26.461 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.461 [2024-06-09 21:00:54.341151] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:26.461 [2024-06-09 21:00:54.341305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.461 [2024-06-09 21:00:54.490776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.720 [2024-06-09 21:00:54.683386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.720 [2024-06-09 21:00:54.873795] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.289 21:00:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:27.289 21:00:55 -- common/autotest_common.sh@852 -- # return 0 00:18:27.289 21:00:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:27.289 [2024-06-09 21:00:55.459885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.289 [2024-06-09 21:00:55.459975] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.289 [2024-06-09 21:00:55.459988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.289 [2024-06-09 21:00:55.460016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.289 [2024-06-09 21:00:55.460023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.289 [2024-06-09 21:00:55.460066] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.289 [2024-06-09 21:00:55.460075] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.289 [2024-06-09 21:00:55.460098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.547 21:00:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:27.547 21:00:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.548 21:00:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.806 21:00:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.806 "name": "Existed_Raid", 00:18:27.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.806 "strip_size_kb": 0, 00:18:27.806 "state": "configuring", 00:18:27.806 "raid_level": "raid1", 00:18:27.806 "superblock": false, 00:18:27.807 "num_base_bdevs": 4, 00:18:27.807 "num_base_bdevs_discovered": 0, 00:18:27.807 "num_base_bdevs_operational": 4, 00:18:27.807 "base_bdevs_list": [ 00:18:27.807 { 00:18:27.807 "name": 
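At this point Existed_Raid has been created purely from base bdev names that do not exist yet, so it must sit in 'configuring' with zero discovered base bdevs. verify_raid_bdev_state pulls the entry by name and checks its fields; the assertions amount to roughly the following (the field-by-field jq reads are an assumption about the helper's internals):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r '.state' <<<"$tmp")" = "configuring" ]
  [ "$(jq -r '.raid_level' <<<"$tmp")" = "raid1" ]
  [ "$(jq -r '.num_base_bdevs_discovered' <<<"$tmp")" -eq 0 ]
  [ "$(jq -r '.num_base_bdevs_operational' <<<"$tmp")" -eq 4 ]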
"BaseBdev1", 00:18:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.807 "is_configured": false, 00:18:27.807 "data_offset": 0, 00:18:27.807 "data_size": 0 00:18:27.807 }, 00:18:27.807 { 00:18:27.807 "name": "BaseBdev2", 00:18:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.807 "is_configured": false, 00:18:27.807 "data_offset": 0, 00:18:27.807 "data_size": 0 00:18:27.807 }, 00:18:27.807 { 00:18:27.807 "name": "BaseBdev3", 00:18:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.807 "is_configured": false, 00:18:27.807 "data_offset": 0, 00:18:27.807 "data_size": 0 00:18:27.807 }, 00:18:27.807 { 00:18:27.807 "name": "BaseBdev4", 00:18:27.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.807 "is_configured": false, 00:18:27.807 "data_offset": 0, 00:18:27.807 "data_size": 0 00:18:27.807 } 00:18:27.807 ] 00:18:27.807 }' 00:18:27.807 21:00:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.807 21:00:55 -- common/autotest_common.sh@10 -- # set +x 00:18:28.374 21:00:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.374 [2024-06-09 21:00:56.507948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.374 [2024-06-09 21:00:56.507980] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:28.374 21:00:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:28.633 [2024-06-09 21:00:56.708001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.633 [2024-06-09 21:00:56.708056] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.633 [2024-06-09 21:00:56.708067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.633 [2024-06-09 21:00:56.708093] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.633 [2024-06-09 21:00:56.708101] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.633 [2024-06-09 21:00:56.708135] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.633 [2024-06-09 21:00:56.708143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.633 [2024-06-09 21:00:56.708164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.633 21:00:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.891 [2024-06-09 21:00:56.998231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.891 BaseBdev1 00:18:28.891 21:00:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:28.891 21:00:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:28.891 21:00:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:28.891 21:00:57 -- common/autotest_common.sh@889 -- # local i 00:18:28.891 21:00:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:28.891 21:00:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:28.891 21:00:57 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.150 21:00:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.409 [ 00:18:29.409 { 00:18:29.409 "name": "BaseBdev1", 00:18:29.409 "aliases": [ 00:18:29.409 "625957da-ec0f-4427-b104-94f5456f9b97" 00:18:29.409 ], 00:18:29.409 "product_name": "Malloc disk", 00:18:29.409 "block_size": 512, 00:18:29.409 "num_blocks": 65536, 00:18:29.409 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:29.409 "assigned_rate_limits": { 00:18:29.409 "rw_ios_per_sec": 0, 00:18:29.409 "rw_mbytes_per_sec": 0, 00:18:29.409 "r_mbytes_per_sec": 0, 00:18:29.409 "w_mbytes_per_sec": 0 00:18:29.409 }, 00:18:29.409 "claimed": true, 00:18:29.409 "claim_type": "exclusive_write", 00:18:29.409 "zoned": false, 00:18:29.409 "supported_io_types": { 00:18:29.409 "read": true, 00:18:29.409 "write": true, 00:18:29.409 "unmap": true, 00:18:29.409 "write_zeroes": true, 00:18:29.409 "flush": true, 00:18:29.409 "reset": true, 00:18:29.409 "compare": false, 00:18:29.409 "compare_and_write": false, 00:18:29.409 "abort": true, 00:18:29.409 "nvme_admin": false, 00:18:29.409 "nvme_io": false 00:18:29.409 }, 00:18:29.409 "memory_domains": [ 00:18:29.409 { 00:18:29.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.409 "dma_device_type": 2 00:18:29.409 } 00:18:29.409 ], 00:18:29.409 "driver_specific": {} 00:18:29.409 } 00:18:29.409 ] 00:18:29.409 21:00:57 -- common/autotest_common.sh@895 -- # return 0 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.409 21:00:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.667 21:00:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.667 "name": "Existed_Raid", 00:18:29.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.667 "strip_size_kb": 0, 00:18:29.667 "state": "configuring", 00:18:29.667 "raid_level": "raid1", 00:18:29.667 "superblock": false, 00:18:29.667 "num_base_bdevs": 4, 00:18:29.667 "num_base_bdevs_discovered": 1, 00:18:29.667 "num_base_bdevs_operational": 4, 00:18:29.667 "base_bdevs_list": [ 00:18:29.667 { 00:18:29.667 "name": "BaseBdev1", 00:18:29.667 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:29.667 "is_configured": true, 00:18:29.667 "data_offset": 0, 00:18:29.667 "data_size": 65536 00:18:29.667 }, 00:18:29.667 { 00:18:29.667 "name": "BaseBdev2", 00:18:29.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.667 "is_configured": false, 00:18:29.667 "data_offset": 0, 00:18:29.667 "data_size": 0 00:18:29.667 }, 
00:18:29.667 { 00:18:29.667 "name": "BaseBdev3", 00:18:29.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.667 "is_configured": false, 00:18:29.667 "data_offset": 0, 00:18:29.667 "data_size": 0 00:18:29.667 }, 00:18:29.667 { 00:18:29.667 "name": "BaseBdev4", 00:18:29.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.668 "is_configured": false, 00:18:29.668 "data_offset": 0, 00:18:29.668 "data_size": 0 00:18:29.668 } 00:18:29.668 ] 00:18:29.668 }' 00:18:29.668 21:00:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.668 21:00:57 -- common/autotest_common.sh@10 -- # set +x 00:18:30.236 21:00:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.495 [2024-06-09 21:00:58.454506] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.495 [2024-06-09 21:00:58.454546] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:30.495 [2024-06-09 21:00:58.654584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.495 [2024-06-09 21:00:58.656539] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:30.495 [2024-06-09 21:00:58.656617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:30.495 [2024-06-09 21:00:58.656628] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:30.495 [2024-06-09 21:00:58.656654] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:30.495 [2024-06-09 21:00:58.656662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:30.495 [2024-06-09 21:00:58.656680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:30.495 21:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.754 "name": "Existed_Raid", 00:18:30.754 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:30.754 "strip_size_kb": 0, 00:18:30.754 "state": "configuring", 00:18:30.754 "raid_level": "raid1", 00:18:30.754 "superblock": false, 00:18:30.754 "num_base_bdevs": 4, 00:18:30.754 "num_base_bdevs_discovered": 1, 00:18:30.754 "num_base_bdevs_operational": 4, 00:18:30.754 "base_bdevs_list": [ 00:18:30.754 { 00:18:30.754 "name": "BaseBdev1", 00:18:30.754 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:30.754 "is_configured": true, 00:18:30.754 "data_offset": 0, 00:18:30.754 "data_size": 65536 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "name": "BaseBdev2", 00:18:30.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.754 "is_configured": false, 00:18:30.754 "data_offset": 0, 00:18:30.754 "data_size": 0 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "name": "BaseBdev3", 00:18:30.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.754 "is_configured": false, 00:18:30.754 "data_offset": 0, 00:18:30.754 "data_size": 0 00:18:30.754 }, 00:18:30.754 { 00:18:30.754 "name": "BaseBdev4", 00:18:30.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.754 "is_configured": false, 00:18:30.754 "data_offset": 0, 00:18:30.754 "data_size": 0 00:18:30.754 } 00:18:30.754 ] 00:18:30.754 }' 00:18:30.754 21:00:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.754 21:00:58 -- common/autotest_common.sh@10 -- # set +x 00:18:31.690 21:00:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.690 [2024-06-09 21:00:59.818316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.690 BaseBdev2 00:18:31.690 21:00:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:31.690 21:00:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:31.690 21:00:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:31.690 21:00:59 -- common/autotest_common.sh@889 -- # local i 00:18:31.690 21:00:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:31.690 21:00:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:31.690 21:00:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.949 21:01:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.209 [ 00:18:32.209 { 00:18:32.209 "name": "BaseBdev2", 00:18:32.209 "aliases": [ 00:18:32.209 "7fc4e123-5789-4a95-a347-1f4abda23de8" 00:18:32.209 ], 00:18:32.209 "product_name": "Malloc disk", 00:18:32.209 "block_size": 512, 00:18:32.209 "num_blocks": 65536, 00:18:32.209 "uuid": "7fc4e123-5789-4a95-a347-1f4abda23de8", 00:18:32.209 "assigned_rate_limits": { 00:18:32.209 "rw_ios_per_sec": 0, 00:18:32.209 "rw_mbytes_per_sec": 0, 00:18:32.209 "r_mbytes_per_sec": 0, 00:18:32.209 "w_mbytes_per_sec": 0 00:18:32.209 }, 00:18:32.209 "claimed": true, 00:18:32.209 "claim_type": "exclusive_write", 00:18:32.209 "zoned": false, 00:18:32.209 "supported_io_types": { 00:18:32.209 "read": true, 00:18:32.209 "write": true, 00:18:32.209 "unmap": true, 00:18:32.209 "write_zeroes": true, 00:18:32.209 "flush": true, 00:18:32.209 "reset": true, 00:18:32.209 "compare": false, 00:18:32.209 "compare_and_write": false, 00:18:32.209 "abort": true, 00:18:32.209 "nvme_admin": false, 00:18:32.209 "nvme_io": false 00:18:32.209 }, 00:18:32.209 "memory_domains": [ 00:18:32.209 { 
00:18:32.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.209 "dma_device_type": 2 00:18:32.209 } 00:18:32.209 ], 00:18:32.209 "driver_specific": {} 00:18:32.209 } 00:18:32.209 ] 00:18:32.209 21:01:00 -- common/autotest_common.sh@895 -- # return 0 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.209 21:01:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.467 21:01:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.467 "name": "Existed_Raid", 00:18:32.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.467 "strip_size_kb": 0, 00:18:32.467 "state": "configuring", 00:18:32.467 "raid_level": "raid1", 00:18:32.467 "superblock": false, 00:18:32.467 "num_base_bdevs": 4, 00:18:32.467 "num_base_bdevs_discovered": 2, 00:18:32.467 "num_base_bdevs_operational": 4, 00:18:32.467 "base_bdevs_list": [ 00:18:32.467 { 00:18:32.467 "name": "BaseBdev1", 00:18:32.467 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:32.467 "is_configured": true, 00:18:32.467 "data_offset": 0, 00:18:32.467 "data_size": 65536 00:18:32.467 }, 00:18:32.467 { 00:18:32.468 "name": "BaseBdev2", 00:18:32.468 "uuid": "7fc4e123-5789-4a95-a347-1f4abda23de8", 00:18:32.468 "is_configured": true, 00:18:32.468 "data_offset": 0, 00:18:32.468 "data_size": 65536 00:18:32.468 }, 00:18:32.468 { 00:18:32.468 "name": "BaseBdev3", 00:18:32.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.468 "is_configured": false, 00:18:32.468 "data_offset": 0, 00:18:32.468 "data_size": 0 00:18:32.468 }, 00:18:32.468 { 00:18:32.468 "name": "BaseBdev4", 00:18:32.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.468 "is_configured": false, 00:18:32.468 "data_offset": 0, 00:18:32.468 "data_size": 0 00:18:32.468 } 00:18:32.468 ] 00:18:32.468 }' 00:18:32.468 21:01:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.468 21:01:00 -- common/autotest_common.sh@10 -- # set +x 00:18:33.036 21:01:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.295 [2024-06-09 21:01:01.330625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.295 BaseBdev3 00:18:33.295 21:01:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:33.295 21:01:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:33.295 21:01:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:33.295 21:01:01 -- 
common/autotest_common.sh@889 -- # local i 00:18:33.295 21:01:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:33.295 21:01:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:33.295 21:01:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.554 21:01:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:33.812 [ 00:18:33.812 { 00:18:33.812 "name": "BaseBdev3", 00:18:33.812 "aliases": [ 00:18:33.812 "5ec6a51b-cf94-4106-be76-c575c58b1cc6" 00:18:33.812 ], 00:18:33.812 "product_name": "Malloc disk", 00:18:33.812 "block_size": 512, 00:18:33.812 "num_blocks": 65536, 00:18:33.812 "uuid": "5ec6a51b-cf94-4106-be76-c575c58b1cc6", 00:18:33.812 "assigned_rate_limits": { 00:18:33.812 "rw_ios_per_sec": 0, 00:18:33.812 "rw_mbytes_per_sec": 0, 00:18:33.812 "r_mbytes_per_sec": 0, 00:18:33.812 "w_mbytes_per_sec": 0 00:18:33.812 }, 00:18:33.812 "claimed": true, 00:18:33.812 "claim_type": "exclusive_write", 00:18:33.812 "zoned": false, 00:18:33.812 "supported_io_types": { 00:18:33.812 "read": true, 00:18:33.812 "write": true, 00:18:33.812 "unmap": true, 00:18:33.812 "write_zeroes": true, 00:18:33.812 "flush": true, 00:18:33.812 "reset": true, 00:18:33.812 "compare": false, 00:18:33.812 "compare_and_write": false, 00:18:33.812 "abort": true, 00:18:33.812 "nvme_admin": false, 00:18:33.812 "nvme_io": false 00:18:33.812 }, 00:18:33.812 "memory_domains": [ 00:18:33.812 { 00:18:33.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.812 "dma_device_type": 2 00:18:33.812 } 00:18:33.812 ], 00:18:33.812 "driver_specific": {} 00:18:33.812 } 00:18:33.812 ] 00:18:33.812 21:01:01 -- common/autotest_common.sh@895 -- # return 0 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.812 21:01:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.813 21:01:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.813 21:01:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.813 21:01:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.071 21:01:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.071 "name": "Existed_Raid", 00:18:34.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.071 "strip_size_kb": 0, 00:18:34.071 "state": "configuring", 00:18:34.071 "raid_level": "raid1", 00:18:34.071 "superblock": false, 00:18:34.071 "num_base_bdevs": 4, 00:18:34.071 "num_base_bdevs_discovered": 3, 00:18:34.071 "num_base_bdevs_operational": 4, 00:18:34.071 "base_bdevs_list": [ 00:18:34.071 { 00:18:34.071 "name": "BaseBdev1", 
00:18:34.071 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:34.071 "is_configured": true, 00:18:34.071 "data_offset": 0, 00:18:34.071 "data_size": 65536 00:18:34.071 }, 00:18:34.071 { 00:18:34.071 "name": "BaseBdev2", 00:18:34.071 "uuid": "7fc4e123-5789-4a95-a347-1f4abda23de8", 00:18:34.071 "is_configured": true, 00:18:34.071 "data_offset": 0, 00:18:34.071 "data_size": 65536 00:18:34.071 }, 00:18:34.071 { 00:18:34.071 "name": "BaseBdev3", 00:18:34.071 "uuid": "5ec6a51b-cf94-4106-be76-c575c58b1cc6", 00:18:34.071 "is_configured": true, 00:18:34.071 "data_offset": 0, 00:18:34.071 "data_size": 65536 00:18:34.071 }, 00:18:34.071 { 00:18:34.071 "name": "BaseBdev4", 00:18:34.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.071 "is_configured": false, 00:18:34.071 "data_offset": 0, 00:18:34.071 "data_size": 0 00:18:34.071 } 00:18:34.071 ] 00:18:34.071 }' 00:18:34.071 21:01:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.071 21:01:02 -- common/autotest_common.sh@10 -- # set +x 00:18:34.636 21:01:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:34.894 [2024-06-09 21:01:02.921772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.894 [2024-06-09 21:01:02.921850] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:34.894 [2024-06-09 21:01:02.921861] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:34.894 [2024-06-09 21:01:02.922047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:34.894 [2024-06-09 21:01:02.922443] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:34.894 [2024-06-09 21:01:02.922469] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:34.894 [2024-06-09 21:01:02.922757] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.894 BaseBdev4 00:18:34.894 21:01:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:34.894 21:01:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:34.894 21:01:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:34.894 21:01:02 -- common/autotest_common.sh@889 -- # local i 00:18:34.894 21:01:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:34.894 21:01:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:34.894 21:01:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.152 21:01:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:35.152 [ 00:18:35.152 { 00:18:35.152 "name": "BaseBdev4", 00:18:35.152 "aliases": [ 00:18:35.152 "2f6b1326-a73a-47bb-8805-3931b264f458" 00:18:35.152 ], 00:18:35.152 "product_name": "Malloc disk", 00:18:35.152 "block_size": 512, 00:18:35.152 "num_blocks": 65536, 00:18:35.152 "uuid": "2f6b1326-a73a-47bb-8805-3931b264f458", 00:18:35.152 "assigned_rate_limits": { 00:18:35.152 "rw_ios_per_sec": 0, 00:18:35.152 "rw_mbytes_per_sec": 0, 00:18:35.152 "r_mbytes_per_sec": 0, 00:18:35.152 "w_mbytes_per_sec": 0 00:18:35.152 }, 00:18:35.152 "claimed": true, 00:18:35.152 "claim_type": "exclusive_write", 00:18:35.152 "zoned": false, 00:18:35.152 "supported_io_types": { 
00:18:35.152 "read": true, 00:18:35.152 "write": true, 00:18:35.152 "unmap": true, 00:18:35.152 "write_zeroes": true, 00:18:35.152 "flush": true, 00:18:35.152 "reset": true, 00:18:35.152 "compare": false, 00:18:35.152 "compare_and_write": false, 00:18:35.152 "abort": true, 00:18:35.152 "nvme_admin": false, 00:18:35.152 "nvme_io": false 00:18:35.152 }, 00:18:35.152 "memory_domains": [ 00:18:35.152 { 00:18:35.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.152 "dma_device_type": 2 00:18:35.152 } 00:18:35.152 ], 00:18:35.152 "driver_specific": {} 00:18:35.152 } 00:18:35.152 ] 00:18:35.410 21:01:03 -- common/autotest_common.sh@895 -- # return 0 00:18:35.410 21:01:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:35.410 21:01:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.411 "name": "Existed_Raid", 00:18:35.411 "uuid": "658d5a5d-3c80-4170-94c0-3bd62c2fc7ef", 00:18:35.411 "strip_size_kb": 0, 00:18:35.411 "state": "online", 00:18:35.411 "raid_level": "raid1", 00:18:35.411 "superblock": false, 00:18:35.411 "num_base_bdevs": 4, 00:18:35.411 "num_base_bdevs_discovered": 4, 00:18:35.411 "num_base_bdevs_operational": 4, 00:18:35.411 "base_bdevs_list": [ 00:18:35.411 { 00:18:35.411 "name": "BaseBdev1", 00:18:35.411 "uuid": "625957da-ec0f-4427-b104-94f5456f9b97", 00:18:35.411 "is_configured": true, 00:18:35.411 "data_offset": 0, 00:18:35.411 "data_size": 65536 00:18:35.411 }, 00:18:35.411 { 00:18:35.411 "name": "BaseBdev2", 00:18:35.411 "uuid": "7fc4e123-5789-4a95-a347-1f4abda23de8", 00:18:35.411 "is_configured": true, 00:18:35.411 "data_offset": 0, 00:18:35.411 "data_size": 65536 00:18:35.411 }, 00:18:35.411 { 00:18:35.411 "name": "BaseBdev3", 00:18:35.411 "uuid": "5ec6a51b-cf94-4106-be76-c575c58b1cc6", 00:18:35.411 "is_configured": true, 00:18:35.411 "data_offset": 0, 00:18:35.411 "data_size": 65536 00:18:35.411 }, 00:18:35.411 { 00:18:35.411 "name": "BaseBdev4", 00:18:35.411 "uuid": "2f6b1326-a73a-47bb-8805-3931b264f458", 00:18:35.411 "is_configured": true, 00:18:35.411 "data_offset": 0, 00:18:35.411 "data_size": 65536 00:18:35.411 } 00:18:35.411 ] 00:18:35.411 }' 00:18:35.411 21:01:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.411 21:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:36.348 [2024-06-09 21:01:04.381393] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.348 21:01:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.614 21:01:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.614 "name": "Existed_Raid", 00:18:36.614 "uuid": "658d5a5d-3c80-4170-94c0-3bd62c2fc7ef", 00:18:36.614 "strip_size_kb": 0, 00:18:36.614 "state": "online", 00:18:36.614 "raid_level": "raid1", 00:18:36.614 "superblock": false, 00:18:36.614 "num_base_bdevs": 4, 00:18:36.614 "num_base_bdevs_discovered": 3, 00:18:36.614 "num_base_bdevs_operational": 3, 00:18:36.614 "base_bdevs_list": [ 00:18:36.614 { 00:18:36.614 "name": null, 00:18:36.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.614 "is_configured": false, 00:18:36.614 "data_offset": 0, 00:18:36.614 "data_size": 65536 00:18:36.614 }, 00:18:36.614 { 00:18:36.614 "name": "BaseBdev2", 00:18:36.614 "uuid": "7fc4e123-5789-4a95-a347-1f4abda23de8", 00:18:36.614 "is_configured": true, 00:18:36.614 "data_offset": 0, 00:18:36.614 "data_size": 65536 00:18:36.614 }, 00:18:36.614 { 00:18:36.614 "name": "BaseBdev3", 00:18:36.615 "uuid": "5ec6a51b-cf94-4106-be76-c575c58b1cc6", 00:18:36.615 "is_configured": true, 00:18:36.615 "data_offset": 0, 00:18:36.615 "data_size": 65536 00:18:36.615 }, 00:18:36.615 { 00:18:36.615 "name": "BaseBdev4", 00:18:36.615 "uuid": "2f6b1326-a73a-47bb-8805-3931b264f458", 00:18:36.615 "is_configured": true, 00:18:36.615 "data_offset": 0, 00:18:36.615 "data_size": 65536 00:18:36.615 } 00:18:36.615 ] 00:18:36.615 }' 00:18:36.615 21:01:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.615 21:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:37.181 21:01:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:37.181 21:01:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:37.181 21:01:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.181 21:01:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:37.440 21:01:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:37.440 21:01:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:37.440 21:01:05 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:37.698 [2024-06-09 21:01:05.745116] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:37.699 21:01:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:37.699 21:01:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:37.699 21:01:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.699 21:01:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:37.957 21:01:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:37.957 21:01:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:37.957 21:01:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:38.216 [2024-06-09 21:01:06.259934] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:38.216 21:01:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:38.216 21:01:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:38.216 21:01:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.216 21:01:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:38.474 21:01:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:38.474 21:01:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.474 21:01:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:38.732 [2024-06-09 21:01:06.788780] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:38.732 [2024-06-09 21:01:06.788818] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.732 [2024-06-09 21:01:06.788900] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.732 [2024-06-09 21:01:06.853397] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.732 [2024-06-09 21:01:06.853434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:38.732 21:01:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:38.732 21:01:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:38.732 21:01:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.732 21:01:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:38.989 21:01:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:38.989 21:01:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:38.989 21:01:07 -- bdev/bdev_raid.sh@287 -- # killprocess 120228 00:18:38.989 21:01:07 -- common/autotest_common.sh@926 -- # '[' -z 120228 ']' 00:18:38.989 21:01:07 -- common/autotest_common.sh@930 -- # kill -0 120228 00:18:38.989 21:01:07 -- common/autotest_common.sh@931 -- # uname 00:18:38.989 21:01:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:38.989 21:01:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120228 00:18:38.989 21:01:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:38.989 21:01:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:38.989 21:01:07 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 120228' 00:18:38.989 killing process with pid 120228 00:18:38.989 21:01:07 -- common/autotest_common.sh@945 -- # kill 120228 00:18:38.989 [2024-06-09 21:01:07.135792] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.989 [2024-06-09 21:01:07.135906] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.989 21:01:07 -- common/autotest_common.sh@950 -- # wait 120228 00:18:39.924 ************************************ 00:18:39.924 END TEST raid_state_function_test 00:18:39.924 ************************************ 00:18:39.924 21:01:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:39.924 00:18:39.924 real 0m13.800s 00:18:39.924 user 0m24.749s 00:18:39.924 sys 0m1.561s 00:18:39.924 21:01:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.924 21:01:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:40.183 21:01:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:40.183 21:01:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.183 21:01:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.183 ************************************ 00:18:40.183 START TEST raid_state_function_test_sb 00:18:40.183 ************************************ 00:18:40.183 21:01:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=120668 00:18:40.183 Process raid pid: 120668 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120668' 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:40.183 21:01:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120668 /var/tmp/spdk-raid.sock 00:18:40.183 21:01:08 -- common/autotest_common.sh@819 -- # '[' -z 120668 ']' 00:18:40.183 21:01:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:40.183 21:01:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:40.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:40.183 21:01:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:40.183 21:01:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:40.183 21:01:08 -- common/autotest_common.sh@10 -- # set +x 00:18:40.183 [2024-06-09 21:01:08.195636] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:40.183 [2024-06-09 21:01:08.195837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.183 [2024-06-09 21:01:08.353254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.442 [2024-06-09 21:01:08.542548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.700 [2024-06-09 21:01:08.725772] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.958 21:01:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:40.958 21:01:09 -- common/autotest_common.sh@852 -- # return 0 00:18:40.959 21:01:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:41.217 [2024-06-09 21:01:09.351875] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.217 [2024-06-09 21:01:09.351983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.217 [2024-06-09 21:01:09.352013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.217 [2024-06-09 21:01:09.352033] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.217 [2024-06-09 21:01:09.352041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.217 [2024-06-09 21:01:09.352076] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.217 [2024-06-09 21:01:09.352085] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:41.217 [2024-06-09 21:01:09.352105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.217 21:01:09 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.217 21:01:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.475 21:01:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.475 "name": "Existed_Raid", 00:18:41.475 "uuid": "7aaf0f0d-e2ab-40c8-b9a9-cebb59e3318f", 00:18:41.475 "strip_size_kb": 0, 00:18:41.475 "state": "configuring", 00:18:41.475 "raid_level": "raid1", 00:18:41.475 "superblock": true, 00:18:41.475 "num_base_bdevs": 4, 00:18:41.475 "num_base_bdevs_discovered": 0, 00:18:41.475 "num_base_bdevs_operational": 4, 00:18:41.475 "base_bdevs_list": [ 00:18:41.475 { 00:18:41.475 "name": "BaseBdev1", 00:18:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.475 "is_configured": false, 00:18:41.475 "data_offset": 0, 00:18:41.475 "data_size": 0 00:18:41.475 }, 00:18:41.475 { 00:18:41.475 "name": "BaseBdev2", 00:18:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.475 "is_configured": false, 00:18:41.475 "data_offset": 0, 00:18:41.475 "data_size": 0 00:18:41.475 }, 00:18:41.475 { 00:18:41.475 "name": "BaseBdev3", 00:18:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.475 "is_configured": false, 00:18:41.475 "data_offset": 0, 00:18:41.475 "data_size": 0 00:18:41.475 }, 00:18:41.475 { 00:18:41.475 "name": "BaseBdev4", 00:18:41.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.475 "is_configured": false, 00:18:41.475 "data_offset": 0, 00:18:41.475 "data_size": 0 00:18:41.475 } 00:18:41.475 ] 00:18:41.475 }' 00:18:41.475 21:01:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.475 21:01:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.042 21:01:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.300 [2024-06-09 21:01:10.323927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.300 [2024-06-09 21:01:10.323984] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:42.300 21:01:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:42.558 [2024-06-09 21:01:10.572041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:42.558 [2024-06-09 21:01:10.572110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:42.558 [2024-06-09 21:01:10.572138] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:42.558 [2024-06-09 21:01:10.572164] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:42.558 [2024-06-09 21:01:10.572172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:42.558 [2024-06-09 21:01:10.572206] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:42.558 [2024-06-09 21:01:10.572214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:42.558 [2024-06-09 21:01:10.572236] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:42.558 21:01:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.816 [2024-06-09 21:01:10.811280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:42.816 BaseBdev1 00:18:42.816 21:01:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:42.816 21:01:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:42.816 21:01:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.816 21:01:10 -- common/autotest_common.sh@889 -- # local i 00:18:42.816 21:01:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.816 21:01:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.816 21:01:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.074 21:01:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:43.333 [ 00:18:43.333 { 00:18:43.333 "name": "BaseBdev1", 00:18:43.333 "aliases": [ 00:18:43.333 "4022f1da-730e-4254-b922-a0cb848bab5c" 00:18:43.333 ], 00:18:43.333 "product_name": "Malloc disk", 00:18:43.333 "block_size": 512, 00:18:43.333 "num_blocks": 65536, 00:18:43.333 "uuid": "4022f1da-730e-4254-b922-a0cb848bab5c", 00:18:43.333 "assigned_rate_limits": { 00:18:43.333 "rw_ios_per_sec": 0, 00:18:43.333 "rw_mbytes_per_sec": 0, 00:18:43.333 "r_mbytes_per_sec": 0, 00:18:43.333 "w_mbytes_per_sec": 0 00:18:43.333 }, 00:18:43.333 "claimed": true, 00:18:43.333 "claim_type": "exclusive_write", 00:18:43.333 "zoned": false, 00:18:43.333 "supported_io_types": { 00:18:43.333 "read": true, 00:18:43.333 "write": true, 00:18:43.333 "unmap": true, 00:18:43.333 "write_zeroes": true, 00:18:43.333 "flush": true, 00:18:43.333 "reset": true, 00:18:43.333 "compare": false, 00:18:43.333 "compare_and_write": false, 00:18:43.333 "abort": true, 00:18:43.333 "nvme_admin": false, 00:18:43.333 "nvme_io": false 00:18:43.333 }, 00:18:43.333 "memory_domains": [ 00:18:43.333 { 00:18:43.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.333 "dma_device_type": 2 00:18:43.333 } 00:18:43.333 ], 00:18:43.333 "driver_specific": {} 00:18:43.333 } 00:18:43.333 ] 00:18:43.333 21:01:11 -- common/autotest_common.sh@895 -- # return 0 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@125 -- # local tmp 
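The trace here is the test's verify_raid_bdev_state helper re-reading Existed_Raid after each base bdev appears. Condensed out of the xtrace, the cycle being exercised looks roughly like the sketch below — a minimal hand-written summary, assuming an SPDK target (such as the bdev_svc app used in this run) is already listening on /var/tmp/spdk-raid.sock and that jq is installed; the rpc.py commands and the jq select filter are taken from the trace itself, while the jq output format string is illustrative only:

    # Register the raid1 bdev first; with no base bdevs present yet it is
    # created in the "configuring" state ("base bdev ... doesn't exist now").
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # A 32 MiB malloc bdev with 512-byte blocks, matching the trace above.
        $rpc bdev_malloc_create 32 512 -b "$b"
        # On examine, the configuring raid claims the new base bdev and its
        # discovered count grows by one; the state stays "configuring" until
        # all four base bdevs exist.
        $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    done
    # Expected progression: configuring 1/4 ... configuring 3/4, then online 4/4.

Because raid1 carries redundancy, the bdev_malloc_delete calls later in this log can remove a single base bdev while Existed_Raid stays online, with num_base_bdevs_operational dropping to 3 — which is exactly what the subsequent verify_raid_bdev_state checks assert.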
00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.333 21:01:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.592 21:01:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.592 "name": "Existed_Raid", 00:18:43.592 "uuid": "b218abf1-606c-4951-accf-36b672373954", 00:18:43.592 "strip_size_kb": 0, 00:18:43.592 "state": "configuring", 00:18:43.592 "raid_level": "raid1", 00:18:43.592 "superblock": true, 00:18:43.592 "num_base_bdevs": 4, 00:18:43.592 "num_base_bdevs_discovered": 1, 00:18:43.592 "num_base_bdevs_operational": 4, 00:18:43.592 "base_bdevs_list": [ 00:18:43.592 { 00:18:43.592 "name": "BaseBdev1", 00:18:43.592 "uuid": "4022f1da-730e-4254-b922-a0cb848bab5c", 00:18:43.592 "is_configured": true, 00:18:43.592 "data_offset": 2048, 00:18:43.592 "data_size": 63488 00:18:43.592 }, 00:18:43.592 { 00:18:43.592 "name": "BaseBdev2", 00:18:43.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.592 "is_configured": false, 00:18:43.592 "data_offset": 0, 00:18:43.592 "data_size": 0 00:18:43.592 }, 00:18:43.592 { 00:18:43.592 "name": "BaseBdev3", 00:18:43.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.592 "is_configured": false, 00:18:43.592 "data_offset": 0, 00:18:43.592 "data_size": 0 00:18:43.592 }, 00:18:43.592 { 00:18:43.592 "name": "BaseBdev4", 00:18:43.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.592 "is_configured": false, 00:18:43.592 "data_offset": 0, 00:18:43.592 "data_size": 0 00:18:43.592 } 00:18:43.592 ] 00:18:43.592 }' 00:18:43.592 21:01:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.592 21:01:11 -- common/autotest_common.sh@10 -- # set +x 00:18:44.158 21:01:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.158 [2024-06-09 21:01:12.323629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.158 [2024-06-09 21:01:12.323704] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:44.416 21:01:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:44.416 21:01:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:44.674 21:01:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:44.932 BaseBdev1 00:18:44.932 21:01:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:44.932 21:01:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:44.932 21:01:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:44.932 21:01:12 -- common/autotest_common.sh@889 -- # local i 00:18:44.932 21:01:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:44.932 21:01:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:44.932 21:01:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.190 21:01:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:45.448 [ 00:18:45.448 { 00:18:45.448 "name": "BaseBdev1", 00:18:45.448 "aliases": [ 00:18:45.448 "91bff64a-d77c-4071-9429-9c2d06f48296" 00:18:45.448 ], 00:18:45.448 
"product_name": "Malloc disk", 00:18:45.448 "block_size": 512, 00:18:45.448 "num_blocks": 65536, 00:18:45.448 "uuid": "91bff64a-d77c-4071-9429-9c2d06f48296", 00:18:45.448 "assigned_rate_limits": { 00:18:45.448 "rw_ios_per_sec": 0, 00:18:45.448 "rw_mbytes_per_sec": 0, 00:18:45.448 "r_mbytes_per_sec": 0, 00:18:45.448 "w_mbytes_per_sec": 0 00:18:45.448 }, 00:18:45.448 "claimed": false, 00:18:45.448 "zoned": false, 00:18:45.448 "supported_io_types": { 00:18:45.448 "read": true, 00:18:45.448 "write": true, 00:18:45.449 "unmap": true, 00:18:45.449 "write_zeroes": true, 00:18:45.449 "flush": true, 00:18:45.449 "reset": true, 00:18:45.449 "compare": false, 00:18:45.449 "compare_and_write": false, 00:18:45.449 "abort": true, 00:18:45.449 "nvme_admin": false, 00:18:45.449 "nvme_io": false 00:18:45.449 }, 00:18:45.449 "memory_domains": [ 00:18:45.449 { 00:18:45.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.449 "dma_device_type": 2 00:18:45.449 } 00:18:45.449 ], 00:18:45.449 "driver_specific": {} 00:18:45.449 } 00:18:45.449 ] 00:18:45.449 21:01:13 -- common/autotest_common.sh@895 -- # return 0 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:45.449 [2024-06-09 21:01:13.569696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.449 [2024-06-09 21:01:13.571633] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.449 [2024-06-09 21:01:13.571728] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.449 [2024-06-09 21:01:13.571756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:45.449 [2024-06-09 21:01:13.571781] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.449 [2024-06-09 21:01:13.571790] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:45.449 [2024-06-09 21:01:13.571806] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.449 21:01:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.707 21:01:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.707 "name": "Existed_Raid", 00:18:45.707 "uuid": 
"25f6cf16-c550-4b22-b819-10f01048fd68", 00:18:45.707 "strip_size_kb": 0, 00:18:45.707 "state": "configuring", 00:18:45.707 "raid_level": "raid1", 00:18:45.707 "superblock": true, 00:18:45.707 "num_base_bdevs": 4, 00:18:45.707 "num_base_bdevs_discovered": 1, 00:18:45.707 "num_base_bdevs_operational": 4, 00:18:45.707 "base_bdevs_list": [ 00:18:45.707 { 00:18:45.707 "name": "BaseBdev1", 00:18:45.707 "uuid": "91bff64a-d77c-4071-9429-9c2d06f48296", 00:18:45.707 "is_configured": true, 00:18:45.707 "data_offset": 2048, 00:18:45.707 "data_size": 63488 00:18:45.707 }, 00:18:45.707 { 00:18:45.707 "name": "BaseBdev2", 00:18:45.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.707 "is_configured": false, 00:18:45.707 "data_offset": 0, 00:18:45.707 "data_size": 0 00:18:45.707 }, 00:18:45.707 { 00:18:45.707 "name": "BaseBdev3", 00:18:45.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.707 "is_configured": false, 00:18:45.707 "data_offset": 0, 00:18:45.707 "data_size": 0 00:18:45.707 }, 00:18:45.707 { 00:18:45.707 "name": "BaseBdev4", 00:18:45.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.707 "is_configured": false, 00:18:45.707 "data_offset": 0, 00:18:45.707 "data_size": 0 00:18:45.707 } 00:18:45.707 ] 00:18:45.707 }' 00:18:45.707 21:01:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.707 21:01:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.273 21:01:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:46.839 [2024-06-09 21:01:14.762655] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.839 BaseBdev2 00:18:46.839 21:01:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:46.839 21:01:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:46.839 21:01:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:46.839 21:01:14 -- common/autotest_common.sh@889 -- # local i 00:18:46.839 21:01:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:46.839 21:01:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:46.839 21:01:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.839 21:01:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:47.097 [ 00:18:47.097 { 00:18:47.097 "name": "BaseBdev2", 00:18:47.097 "aliases": [ 00:18:47.097 "2f8562cd-129c-429d-b119-9e05dabde638" 00:18:47.097 ], 00:18:47.097 "product_name": "Malloc disk", 00:18:47.097 "block_size": 512, 00:18:47.097 "num_blocks": 65536, 00:18:47.097 "uuid": "2f8562cd-129c-429d-b119-9e05dabde638", 00:18:47.098 "assigned_rate_limits": { 00:18:47.098 "rw_ios_per_sec": 0, 00:18:47.098 "rw_mbytes_per_sec": 0, 00:18:47.098 "r_mbytes_per_sec": 0, 00:18:47.098 "w_mbytes_per_sec": 0 00:18:47.098 }, 00:18:47.098 "claimed": true, 00:18:47.098 "claim_type": "exclusive_write", 00:18:47.098 "zoned": false, 00:18:47.098 "supported_io_types": { 00:18:47.098 "read": true, 00:18:47.098 "write": true, 00:18:47.098 "unmap": true, 00:18:47.098 "write_zeroes": true, 00:18:47.098 "flush": true, 00:18:47.098 "reset": true, 00:18:47.098 "compare": false, 00:18:47.098 "compare_and_write": false, 00:18:47.098 "abort": true, 00:18:47.098 "nvme_admin": false, 00:18:47.098 "nvme_io": false 00:18:47.098 }, 00:18:47.098 "memory_domains": [ 00:18:47.098 { 
00:18:47.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.098 "dma_device_type": 2 00:18:47.098 } 00:18:47.098 ], 00:18:47.098 "driver_specific": {} 00:18:47.098 } 00:18:47.098 ] 00:18:47.098 21:01:15 -- common/autotest_common.sh@895 -- # return 0 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.098 21:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.357 21:01:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.357 "name": "Existed_Raid", 00:18:47.357 "uuid": "25f6cf16-c550-4b22-b819-10f01048fd68", 00:18:47.357 "strip_size_kb": 0, 00:18:47.357 "state": "configuring", 00:18:47.357 "raid_level": "raid1", 00:18:47.357 "superblock": true, 00:18:47.357 "num_base_bdevs": 4, 00:18:47.357 "num_base_bdevs_discovered": 2, 00:18:47.357 "num_base_bdevs_operational": 4, 00:18:47.357 "base_bdevs_list": [ 00:18:47.357 { 00:18:47.357 "name": "BaseBdev1", 00:18:47.357 "uuid": "91bff64a-d77c-4071-9429-9c2d06f48296", 00:18:47.357 "is_configured": true, 00:18:47.357 "data_offset": 2048, 00:18:47.357 "data_size": 63488 00:18:47.357 }, 00:18:47.357 { 00:18:47.357 "name": "BaseBdev2", 00:18:47.357 "uuid": "2f8562cd-129c-429d-b119-9e05dabde638", 00:18:47.357 "is_configured": true, 00:18:47.357 "data_offset": 2048, 00:18:47.357 "data_size": 63488 00:18:47.357 }, 00:18:47.357 { 00:18:47.357 "name": "BaseBdev3", 00:18:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.357 "is_configured": false, 00:18:47.357 "data_offset": 0, 00:18:47.357 "data_size": 0 00:18:47.357 }, 00:18:47.357 { 00:18:47.357 "name": "BaseBdev4", 00:18:47.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.357 "is_configured": false, 00:18:47.357 "data_offset": 0, 00:18:47.357 "data_size": 0 00:18:47.357 } 00:18:47.357 ] 00:18:47.357 }' 00:18:47.357 21:01:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.357 21:01:15 -- common/autotest_common.sh@10 -- # set +x 00:18:47.965 21:01:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:48.224 [2024-06-09 21:01:16.198920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:48.224 BaseBdev3 00:18:48.224 21:01:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:48.224 21:01:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:48.224 21:01:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:48.224 21:01:16 -- 
common/autotest_common.sh@889 -- # local i 00:18:48.224 21:01:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:48.224 21:01:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:48.224 21:01:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.483 21:01:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:48.742 [ 00:18:48.742 { 00:18:48.742 "name": "BaseBdev3", 00:18:48.742 "aliases": [ 00:18:48.742 "bb5bd130-8b5e-4ec2-a92e-adc6c799b230" 00:18:48.742 ], 00:18:48.742 "product_name": "Malloc disk", 00:18:48.742 "block_size": 512, 00:18:48.742 "num_blocks": 65536, 00:18:48.742 "uuid": "bb5bd130-8b5e-4ec2-a92e-adc6c799b230", 00:18:48.742 "assigned_rate_limits": { 00:18:48.742 "rw_ios_per_sec": 0, 00:18:48.742 "rw_mbytes_per_sec": 0, 00:18:48.742 "r_mbytes_per_sec": 0, 00:18:48.742 "w_mbytes_per_sec": 0 00:18:48.742 }, 00:18:48.742 "claimed": true, 00:18:48.742 "claim_type": "exclusive_write", 00:18:48.742 "zoned": false, 00:18:48.742 "supported_io_types": { 00:18:48.742 "read": true, 00:18:48.742 "write": true, 00:18:48.742 "unmap": true, 00:18:48.742 "write_zeroes": true, 00:18:48.742 "flush": true, 00:18:48.742 "reset": true, 00:18:48.742 "compare": false, 00:18:48.742 "compare_and_write": false, 00:18:48.742 "abort": true, 00:18:48.742 "nvme_admin": false, 00:18:48.742 "nvme_io": false 00:18:48.742 }, 00:18:48.742 "memory_domains": [ 00:18:48.742 { 00:18:48.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.742 "dma_device_type": 2 00:18:48.742 } 00:18:48.742 ], 00:18:48.742 "driver_specific": {} 00:18:48.742 } 00:18:48.742 ] 00:18:48.742 21:01:16 -- common/autotest_common.sh@895 -- # return 0 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.742 21:01:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.001 21:01:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.001 "name": "Existed_Raid", 00:18:49.001 "uuid": "25f6cf16-c550-4b22-b819-10f01048fd68", 00:18:49.001 "strip_size_kb": 0, 00:18:49.001 "state": "configuring", 00:18:49.001 "raid_level": "raid1", 00:18:49.001 "superblock": true, 00:18:49.001 "num_base_bdevs": 4, 00:18:49.001 "num_base_bdevs_discovered": 3, 00:18:49.001 "num_base_bdevs_operational": 4, 00:18:49.001 "base_bdevs_list": [ 00:18:49.001 { 00:18:49.001 "name": "BaseBdev1", 00:18:49.001 
"uuid": "91bff64a-d77c-4071-9429-9c2d06f48296", 00:18:49.001 "is_configured": true, 00:18:49.001 "data_offset": 2048, 00:18:49.001 "data_size": 63488 00:18:49.001 }, 00:18:49.001 { 00:18:49.001 "name": "BaseBdev2", 00:18:49.001 "uuid": "2f8562cd-129c-429d-b119-9e05dabde638", 00:18:49.001 "is_configured": true, 00:18:49.001 "data_offset": 2048, 00:18:49.001 "data_size": 63488 00:18:49.001 }, 00:18:49.001 { 00:18:49.001 "name": "BaseBdev3", 00:18:49.001 "uuid": "bb5bd130-8b5e-4ec2-a92e-adc6c799b230", 00:18:49.001 "is_configured": true, 00:18:49.001 "data_offset": 2048, 00:18:49.001 "data_size": 63488 00:18:49.001 }, 00:18:49.001 { 00:18:49.001 "name": "BaseBdev4", 00:18:49.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.001 "is_configured": false, 00:18:49.001 "data_offset": 0, 00:18:49.001 "data_size": 0 00:18:49.001 } 00:18:49.001 ] 00:18:49.001 }' 00:18:49.001 21:01:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.001 21:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:49.568 21:01:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:49.827 [2024-06-09 21:01:17.805297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.827 BaseBdev4 00:18:49.827 [2024-06-09 21:01:17.817134] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:49.827 [2024-06-09 21:01:17.817158] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:49.827 [2024-06-09 21:01:17.817293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:49.827 [2024-06-09 21:01:17.817665] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:49.827 [2024-06-09 21:01:17.817680] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:49.827 21:01:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:49.827 [2024-06-09 21:01:17.817859] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.827 21:01:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:49.827 21:01:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:49.827 21:01:17 -- common/autotest_common.sh@889 -- # local i 00:18:49.827 21:01:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:49.827 21:01:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:49.827 21:01:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.086 21:01:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:50.086 [ 00:18:50.086 { 00:18:50.086 "name": "BaseBdev4", 00:18:50.086 "aliases": [ 00:18:50.086 "aab76b27-2a0a-4fa7-9f00-6766dfb74a63" 00:18:50.086 ], 00:18:50.086 "product_name": "Malloc disk", 00:18:50.086 "block_size": 512, 00:18:50.086 "num_blocks": 65536, 00:18:50.086 "uuid": "aab76b27-2a0a-4fa7-9f00-6766dfb74a63", 00:18:50.086 "assigned_rate_limits": { 00:18:50.086 "rw_ios_per_sec": 0, 00:18:50.086 "rw_mbytes_per_sec": 0, 00:18:50.086 "r_mbytes_per_sec": 0, 00:18:50.086 "w_mbytes_per_sec": 0 00:18:50.086 }, 00:18:50.086 "claimed": true, 00:18:50.086 "claim_type": "exclusive_write", 00:18:50.086 "zoned": false, 00:18:50.086 "supported_io_types": { 00:18:50.086 
"read": true, 00:18:50.086 "write": true, 00:18:50.086 "unmap": true, 00:18:50.086 "write_zeroes": true, 00:18:50.086 "flush": true, 00:18:50.086 "reset": true, 00:18:50.086 "compare": false, 00:18:50.086 "compare_and_write": false, 00:18:50.086 "abort": true, 00:18:50.086 "nvme_admin": false, 00:18:50.086 "nvme_io": false 00:18:50.086 }, 00:18:50.086 "memory_domains": [ 00:18:50.086 { 00:18:50.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.086 "dma_device_type": 2 00:18:50.086 } 00:18:50.086 ], 00:18:50.086 "driver_specific": {} 00:18:50.086 } 00:18:50.086 ] 00:18:50.086 21:01:18 -- common/autotest_common.sh@895 -- # return 0 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.086 21:01:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.345 21:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.345 "name": "Existed_Raid", 00:18:50.345 "uuid": "25f6cf16-c550-4b22-b819-10f01048fd68", 00:18:50.345 "strip_size_kb": 0, 00:18:50.345 "state": "online", 00:18:50.345 "raid_level": "raid1", 00:18:50.345 "superblock": true, 00:18:50.345 "num_base_bdevs": 4, 00:18:50.345 "num_base_bdevs_discovered": 4, 00:18:50.345 "num_base_bdevs_operational": 4, 00:18:50.345 "base_bdevs_list": [ 00:18:50.345 { 00:18:50.345 "name": "BaseBdev1", 00:18:50.345 "uuid": "91bff64a-d77c-4071-9429-9c2d06f48296", 00:18:50.345 "is_configured": true, 00:18:50.345 "data_offset": 2048, 00:18:50.345 "data_size": 63488 00:18:50.345 }, 00:18:50.345 { 00:18:50.345 "name": "BaseBdev2", 00:18:50.345 "uuid": "2f8562cd-129c-429d-b119-9e05dabde638", 00:18:50.345 "is_configured": true, 00:18:50.345 "data_offset": 2048, 00:18:50.345 "data_size": 63488 00:18:50.345 }, 00:18:50.345 { 00:18:50.345 "name": "BaseBdev3", 00:18:50.345 "uuid": "bb5bd130-8b5e-4ec2-a92e-adc6c799b230", 00:18:50.345 "is_configured": true, 00:18:50.345 "data_offset": 2048, 00:18:50.345 "data_size": 63488 00:18:50.345 }, 00:18:50.345 { 00:18:50.345 "name": "BaseBdev4", 00:18:50.345 "uuid": "aab76b27-2a0a-4fa7-9f00-6766dfb74a63", 00:18:50.345 "is_configured": true, 00:18:50.345 "data_offset": 2048, 00:18:50.345 "data_size": 63488 00:18:50.345 } 00:18:50.345 ] 00:18:50.345 }' 00:18:50.345 21:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.345 21:01:18 -- common/autotest_common.sh@10 -- # set +x 00:18:50.913 21:01:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:51.172 [2024-06-09 21:01:19.302241] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.430 21:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.431 21:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.689 21:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.689 "name": "Existed_Raid", 00:18:51.689 "uuid": "25f6cf16-c550-4b22-b819-10f01048fd68", 00:18:51.689 "strip_size_kb": 0, 00:18:51.689 "state": "online", 00:18:51.689 "raid_level": "raid1", 00:18:51.689 "superblock": true, 00:18:51.689 "num_base_bdevs": 4, 00:18:51.689 "num_base_bdevs_discovered": 3, 00:18:51.689 "num_base_bdevs_operational": 3, 00:18:51.689 "base_bdevs_list": [ 00:18:51.689 { 00:18:51.689 "name": null, 00:18:51.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.689 "is_configured": false, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 }, 00:18:51.689 { 00:18:51.689 "name": "BaseBdev2", 00:18:51.689 "uuid": "2f8562cd-129c-429d-b119-9e05dabde638", 00:18:51.689 "is_configured": true, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 }, 00:18:51.689 { 00:18:51.689 "name": "BaseBdev3", 00:18:51.689 "uuid": "bb5bd130-8b5e-4ec2-a92e-adc6c799b230", 00:18:51.689 "is_configured": true, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 }, 00:18:51.689 { 00:18:51.689 "name": "BaseBdev4", 00:18:51.689 "uuid": "aab76b27-2a0a-4fa7-9f00-6766dfb74a63", 00:18:51.689 "is_configured": true, 00:18:51.689 "data_offset": 2048, 00:18:51.689 "data_size": 63488 00:18:51.689 } 00:18:51.689 ] 00:18:51.689 }' 00:18:51.689 21:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.689 21:01:19 -- common/autotest_common.sh@10 -- # set +x 00:18:52.256 21:01:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:52.256 21:01:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:52.256 21:01:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.256 21:01:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:52.515 21:01:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:52.515 21:01:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:52.515 21:01:20 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:52.515 [2024-06-09 21:01:20.689501] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:52.773 21:01:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:52.773 21:01:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:52.774 21:01:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.774 21:01:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:53.032 21:01:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:53.032 21:01:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:53.032 21:01:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:53.032 [2024-06-09 21:01:21.186352] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:53.291 21:01:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:53.549 [2024-06-09 21:01:21.688786] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:53.549 [2024-06-09 21:01:21.688819] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.549 [2024-06-09 21:01:21.688882] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.808 [2024-06-09 21:01:21.752162] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.808 [2024-06-09 21:01:21.752197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:53.808 21:01:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:53.808 21:01:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:53.808 21:01:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.808 21:01:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:54.072 21:01:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:54.072 21:01:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:54.072 21:01:22 -- bdev/bdev_raid.sh@287 -- # killprocess 120668 00:18:54.072 21:01:22 -- common/autotest_common.sh@926 -- # '[' -z 120668 ']' 00:18:54.072 21:01:22 -- common/autotest_common.sh@930 -- # kill -0 120668 00:18:54.072 21:01:22 -- common/autotest_common.sh@931 -- # uname 00:18:54.072 21:01:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:54.072 21:01:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120668 00:18:54.072 killing process with pid 120668 00:18:54.072 21:01:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:54.072 21:01:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:54.072 21:01:22 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 120668' 00:18:54.072 21:01:22 -- common/autotest_common.sh@945 -- # kill 120668 00:18:54.072 [2024-06-09 21:01:22.038875] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.072 21:01:22 -- common/autotest_common.sh@950 -- # wait 120668 00:18:54.072 [2024-06-09 21:01:22.039015] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.005 ************************************ 00:18:55.005 END TEST raid_state_function_test_sb 00:18:55.005 ************************************ 00:18:55.005 21:01:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:55.005 00:18:55.005 real 0m14.842s 00:18:55.005 user 0m26.335s 00:18:55.005 sys 0m1.861s 00:18:55.005 21:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:55.005 21:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:55.005 21:01:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:55.006 21:01:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:55.006 21:01:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:55.006 21:01:23 -- common/autotest_common.sh@10 -- # set +x 00:18:55.006 ************************************ 00:18:55.006 START TEST raid_superblock_test 00:18:55.006 ************************************ 00:18:55.006 21:01:23 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=121122 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121122 /var/tmp/spdk-raid.sock 00:18:55.006 21:01:23 -- common/autotest_common.sh@819 -- # '[' -z 121122 ']' 00:18:55.006 21:01:23 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:55.006 21:01:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:55.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:55.006 21:01:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:55.006 21:01:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
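
A condensed sketch of the setup pattern the trace above records: the test launches the bdev_svc stub app on a private RPC socket, records its pid, and polls until the socket answers. The polling loop below is illustrative only (the real waitforlisten helper in autotest_common.sh is more involved), and rpc_get_methods is assumed here as a cheap liveness probe:

    # start the stub app with raid debug logging on a dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # wait until the UNIX-domain socket accepts RPCs before driving the test
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
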
00:18:55.006 21:01:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:55.006 21:01:23 -- common/autotest_common.sh@10 -- # set +x 00:18:55.006 [2024-06-09 21:01:23.100666] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:55.006 [2024-06-09 21:01:23.100854] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121122 ] 00:18:55.264 [2024-06-09 21:01:23.272159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.522 [2024-06-09 21:01:23.515157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.522 [2024-06-09 21:01:23.684538] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.088 21:01:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:56.088 21:01:24 -- common/autotest_common.sh@852 -- # return 0 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:56.088 21:01:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:56.346 malloc1 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:56.346 [2024-06-09 21:01:24.492461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:56.346 [2024-06-09 21:01:24.492574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.346 [2024-06-09 21:01:24.492610] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:56.346 [2024-06-09 21:01:24.492654] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.346 [2024-06-09 21:01:24.495097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.346 [2024-06-09 21:01:24.495165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:56.346 pt1 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:56.346 21:01:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:56.605 malloc2 00:18:56.605 21:01:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.862 [2024-06-09 21:01:24.944311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.862 [2024-06-09 21:01:24.944398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.862 [2024-06-09 21:01:24.944439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:56.862 [2024-06-09 21:01:24.944501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.862 [2024-06-09 21:01:24.946753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.862 [2024-06-09 21:01:24.946799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.862 pt2 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:56.862 21:01:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:57.120 malloc3 00:18:57.120 21:01:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:57.379 [2024-06-09 21:01:25.365819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:57.379 [2024-06-09 21:01:25.365910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.380 [2024-06-09 21:01:25.365954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:57.380 [2024-06-09 21:01:25.365998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.380 [2024-06-09 21:01:25.368263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.380 [2024-06-09 21:01:25.368333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:57.380 pt3 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.380 21:01:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:57.638 malloc4 00:18:57.638 21:01:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:57.897 [2024-06-09 21:01:25.852007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:57.897 [2024-06-09 21:01:25.852102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.897 [2024-06-09 21:01:25.852135] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:57.897 [2024-06-09 21:01:25.852177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.897 [2024-06-09 21:01:25.854422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.897 [2024-06-09 21:01:25.854486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:57.897 pt4 00:18:57.897 21:01:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:57.897 21:01:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:57.897 21:01:25 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:58.156 [2024-06-09 21:01:26.092133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.156 [2024-06-09 21:01:26.094159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.156 [2024-06-09 21:01:26.094257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:58.156 [2024-06-09 21:01:26.094317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:58.156 [2024-06-09 21:01:26.094593] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:58.156 [2024-06-09 21:01:26.094615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:58.156 [2024-06-09 21:01:26.094769] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:58.156 [2024-06-09 21:01:26.095250] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:58.156 [2024-06-09 21:01:26.095276] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:58.156 [2024-06-09 21:01:26.095465] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
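
Condensed, the device stack built by the @361-@371 loop above is four identical malloc/passthru pairs feeding a single raid1 create call, with the sizes, UUIDs, and -s superblock flag exactly as logged:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # raid1 across the four passthru bdevs; -s writes an on-disk superblock
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
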
00:18:58.156 21:01:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.425 21:01:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.425 "name": "raid_bdev1", 00:18:58.425 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:18:58.425 "strip_size_kb": 0, 00:18:58.425 "state": "online", 00:18:58.425 "raid_level": "raid1", 00:18:58.425 "superblock": true, 00:18:58.425 "num_base_bdevs": 4, 00:18:58.425 "num_base_bdevs_discovered": 4, 00:18:58.425 "num_base_bdevs_operational": 4, 00:18:58.425 "base_bdevs_list": [ 00:18:58.425 { 00:18:58.425 "name": "pt1", 00:18:58.425 "uuid": "4a589580-9ff4-5e50-8c3c-73ed274b5b33", 00:18:58.425 "is_configured": true, 00:18:58.425 "data_offset": 2048, 00:18:58.425 "data_size": 63488 00:18:58.425 }, 00:18:58.425 { 00:18:58.425 "name": "pt2", 00:18:58.425 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:18:58.425 "is_configured": true, 00:18:58.425 "data_offset": 2048, 00:18:58.425 "data_size": 63488 00:18:58.425 }, 00:18:58.425 { 00:18:58.425 "name": "pt3", 00:18:58.425 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:18:58.425 "is_configured": true, 00:18:58.425 "data_offset": 2048, 00:18:58.425 "data_size": 63488 00:18:58.425 }, 00:18:58.425 { 00:18:58.425 "name": "pt4", 00:18:58.425 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:18:58.425 "is_configured": true, 00:18:58.425 "data_offset": 2048, 00:18:58.425 "data_size": 63488 00:18:58.425 } 00:18:58.425 ] 00:18:58.425 }' 00:18:58.425 21:01:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.425 21:01:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.005 21:01:26 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.005 21:01:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:59.263 [2024-06-09 21:01:27.256477] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.263 21:01:27 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=399997bf-1392-470a-9493-b8a5a099b8b5 00:18:59.263 21:01:27 -- bdev/bdev_raid.sh@380 -- # '[' -z 399997bf-1392-470a-9493-b8a5a099b8b5 ']' 00:18:59.263 21:01:27 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:59.521 [2024-06-09 21:01:27.528301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.521 [2024-06-09 21:01:27.528328] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.521 [2024-06-09 21:01:27.528405] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.521 [2024-06-09 21:01:27.528488] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.521 [2024-06-09 21:01:27.528499] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:59.521 21:01:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.521 21:01:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:59.779 21:01:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:59.779 21:01:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:59.779 21:01:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.779 21:01:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
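
State verification throughout these tests is a single RPC dump filtered through jq, as in the @127 lines above; a minimal sketch of the kind of assertions verify_raid_bdev_state makes against the captured JSON:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # dump all raid bdevs and keep only the one under test
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # compare the fields asserted by the test against the expected values
    [[ $(jq -r .state <<<"$info") == online ]]
    [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 4 ]]
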
00:19:00.037 21:01:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.037 21:01:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:00.037 21:01:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.037 21:01:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:00.296 21:01:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:00.296 21:01:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:00.554 21:01:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:00.554 21:01:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:00.813 21:01:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:00.813 21:01:28 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:00.813 21:01:28 -- common/autotest_common.sh@640 -- # local es=0 00:19:00.813 21:01:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:00.813 21:01:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.813 21:01:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:00.813 21:01:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.813 21:01:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:00.813 21:01:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.813 21:01:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:00.813 21:01:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.813 21:01:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:00.813 21:01:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:01.073 [2024-06-09 21:01:28.996538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:01.073 [2024-06-09 21:01:28.998370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:01.073 [2024-06-09 21:01:28.998444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:01.073 [2024-06-09 21:01:28.998483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:01.073 [2024-06-09 21:01:28.998536] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:01.073 [2024-06-09 21:01:28.998620] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:01.073 [2024-06-09 21:01:28.998662] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:01.073 [2024-06-09 21:01:28.998735] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:01.073 [2024-06-09 21:01:28.998762] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.073 [2024-06-09 21:01:28.998773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:01.073 request: 00:19:01.073 { 00:19:01.073 "name": "raid_bdev1", 00:19:01.073 "raid_level": "raid1", 00:19:01.073 "base_bdevs": [ 00:19:01.073 "malloc1", 00:19:01.073 "malloc2", 00:19:01.073 "malloc3", 00:19:01.073 "malloc4" 00:19:01.073 ], 00:19:01.073 "superblock": false, 00:19:01.073 "method": "bdev_raid_create", 00:19:01.073 "req_id": 1 00:19:01.073 } 00:19:01.073 Got JSON-RPC error response 00:19:01.073 response: 00:19:01.073 { 00:19:01.073 "code": -17, 00:19:01.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:01.073 } 00:19:01.073 21:01:29 -- common/autotest_common.sh@643 -- # es=1 00:19:01.073 21:01:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:01.073 21:01:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:01.073 21:01:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:01.073 21:01:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.073 21:01:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:01.073 21:01:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:01.073 21:01:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:01.073 21:01:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.331 [2024-06-09 21:01:29.392598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.331 [2024-06-09 21:01:29.392700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.331 [2024-06-09 21:01:29.392740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:01.331 [2024-06-09 21:01:29.392768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.331 [2024-06-09 21:01:29.395085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.331 [2024-06-09 21:01:29.395161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.331 [2024-06-09 21:01:29.395273] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:01.331 [2024-06-09 21:01:29.395327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.331 pt1 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.331 21:01:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.332 21:01:29 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.332 21:01:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.332 21:01:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.589 21:01:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.589 "name": "raid_bdev1", 00:19:01.589 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:01.589 "strip_size_kb": 0, 00:19:01.589 "state": "configuring", 00:19:01.589 "raid_level": "raid1", 00:19:01.589 "superblock": true, 00:19:01.589 "num_base_bdevs": 4, 00:19:01.589 "num_base_bdevs_discovered": 1, 00:19:01.589 "num_base_bdevs_operational": 4, 00:19:01.589 "base_bdevs_list": [ 00:19:01.589 { 00:19:01.589 "name": "pt1", 00:19:01.589 "uuid": "4a589580-9ff4-5e50-8c3c-73ed274b5b33", 00:19:01.589 "is_configured": true, 00:19:01.589 "data_offset": 2048, 00:19:01.589 "data_size": 63488 00:19:01.589 }, 00:19:01.589 { 00:19:01.589 "name": null, 00:19:01.589 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:01.589 "is_configured": false, 00:19:01.589 "data_offset": 2048, 00:19:01.589 "data_size": 63488 00:19:01.589 }, 00:19:01.589 { 00:19:01.589 "name": null, 00:19:01.589 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:01.589 "is_configured": false, 00:19:01.589 "data_offset": 2048, 00:19:01.589 "data_size": 63488 00:19:01.589 }, 00:19:01.589 { 00:19:01.589 "name": null, 00:19:01.589 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:01.589 "is_configured": false, 00:19:01.589 "data_offset": 2048, 00:19:01.589 "data_size": 63488 00:19:01.589 } 00:19:01.589 ] 00:19:01.589 }' 00:19:01.589 21:01:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.589 21:01:29 -- common/autotest_common.sh@10 -- # set +x 00:19:02.156 21:01:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:02.156 21:01:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.415 [2024-06-09 21:01:30.412815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.415 [2024-06-09 21:01:30.413097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.415 [2024-06-09 21:01:30.413294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:02.415 [2024-06-09 21:01:30.413430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.415 [2024-06-09 21:01:30.414001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.415 [2024-06-09 21:01:30.414219] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.415 [2024-06-09 21:01:30.414445] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:02.415 [2024-06-09 21:01:30.414590] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.415 pt2 00:19:02.415 21:01:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:02.674 [2024-06-09 21:01:30.672854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.674 21:01:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.933 21:01:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.933 "name": "raid_bdev1", 00:19:02.933 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:02.933 "strip_size_kb": 0, 00:19:02.933 "state": "configuring", 00:19:02.933 "raid_level": "raid1", 00:19:02.933 "superblock": true, 00:19:02.933 "num_base_bdevs": 4, 00:19:02.933 "num_base_bdevs_discovered": 1, 00:19:02.933 "num_base_bdevs_operational": 4, 00:19:02.933 "base_bdevs_list": [ 00:19:02.933 { 00:19:02.933 "name": "pt1", 00:19:02.933 "uuid": "4a589580-9ff4-5e50-8c3c-73ed274b5b33", 00:19:02.933 "is_configured": true, 00:19:02.933 "data_offset": 2048, 00:19:02.933 "data_size": 63488 00:19:02.933 }, 00:19:02.933 { 00:19:02.933 "name": null, 00:19:02.933 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:02.933 "is_configured": false, 00:19:02.933 "data_offset": 2048, 00:19:02.933 "data_size": 63488 00:19:02.933 }, 00:19:02.933 { 00:19:02.933 "name": null, 00:19:02.933 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:02.933 "is_configured": false, 00:19:02.933 "data_offset": 2048, 00:19:02.933 "data_size": 63488 00:19:02.933 }, 00:19:02.933 { 00:19:02.933 "name": null, 00:19:02.933 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:02.933 "is_configured": false, 00:19:02.933 "data_offset": 2048, 00:19:02.933 "data_size": 63488 00:19:02.933 } 00:19:02.933 ] 00:19:02.933 }' 00:19:02.933 21:01:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.933 21:01:30 -- common/autotest_common.sh@10 -- # set +x 00:19:03.500 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:03.500 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:03.500 21:01:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.759 [2024-06-09 21:01:31.705076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.759 [2024-06-09 21:01:31.705342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.759 [2024-06-09 21:01:31.705424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:03.759 [2024-06-09 21:01:31.705597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.759 [2024-06-09 21:01:31.706181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.759 [2024-06-09 21:01:31.706413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.759 [2024-06-09 21:01:31.706652] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:03.759 [2024-06-09 
21:01:31.706803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.759 pt2 00:19:03.759 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:03.759 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:03.759 21:01:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:04.018 [2024-06-09 21:01:31.969099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:04.018 [2024-06-09 21:01:31.969312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.018 [2024-06-09 21:01:31.969386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:04.018 [2024-06-09 21:01:31.969542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.018 [2024-06-09 21:01:31.970118] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.018 [2024-06-09 21:01:31.970333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:04.018 [2024-06-09 21:01:31.970596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:04.018 [2024-06-09 21:01:31.970731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:04.018 pt3 00:19:04.018 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:04.018 21:01:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:04.018 21:01:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:04.018 [2024-06-09 21:01:32.165154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:04.018 [2024-06-09 21:01:32.165398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.018 [2024-06-09 21:01:32.165557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:04.018 [2024-06-09 21:01:32.165698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.018 [2024-06-09 21:01:32.166244] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.018 [2024-06-09 21:01:32.166429] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:04.018 [2024-06-09 21:01:32.166669] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:04.018 [2024-06-09 21:01:32.166791] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:04.018 [2024-06-09 21:01:32.167038] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:04.018 [2024-06-09 21:01:32.167151] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:04.018 [2024-06-09 21:01:32.167308] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:04.018 [2024-06-09 21:01:32.167750] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:04.018 [2024-06-09 21:01:32.167877] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:04.018 [2024-06-09 21:01:32.168092] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.018 pt4 
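
Note the assembly behavior visible just above: each bdev_passthru_create triggers examine, the raid superblock is found on the re-registered bdev and it is re-claimed, and raid_bdev1 only transitions back to online once the last member (pt4) appears. Condensed from the @422/@423 trace lines:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # re-register the members one by one; examine finds the raid superblock
    # on each and re-claims it, and the array assembles after the last one
    for i in 2 3 4; do
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
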
00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.018 21:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.276 21:01:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.276 "name": "raid_bdev1", 00:19:04.276 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:04.276 "strip_size_kb": 0, 00:19:04.276 "state": "online", 00:19:04.276 "raid_level": "raid1", 00:19:04.276 "superblock": true, 00:19:04.276 "num_base_bdevs": 4, 00:19:04.276 "num_base_bdevs_discovered": 4, 00:19:04.276 "num_base_bdevs_operational": 4, 00:19:04.277 "base_bdevs_list": [ 00:19:04.277 { 00:19:04.277 "name": "pt1", 00:19:04.277 "uuid": "4a589580-9ff4-5e50-8c3c-73ed274b5b33", 00:19:04.277 "is_configured": true, 00:19:04.277 "data_offset": 2048, 00:19:04.277 "data_size": 63488 00:19:04.277 }, 00:19:04.277 { 00:19:04.277 "name": "pt2", 00:19:04.277 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:04.277 "is_configured": true, 00:19:04.277 "data_offset": 2048, 00:19:04.277 "data_size": 63488 00:19:04.277 }, 00:19:04.277 { 00:19:04.277 "name": "pt3", 00:19:04.277 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:04.277 "is_configured": true, 00:19:04.277 "data_offset": 2048, 00:19:04.277 "data_size": 63488 00:19:04.277 }, 00:19:04.277 { 00:19:04.277 "name": "pt4", 00:19:04.277 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:04.277 "is_configured": true, 00:19:04.277 "data_offset": 2048, 00:19:04.277 "data_size": 63488 00:19:04.277 } 00:19:04.277 ] 00:19:04.277 }' 00:19:04.277 21:01:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.277 21:01:32 -- common/autotest_common.sh@10 -- # set +x 00:19:04.843 21:01:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:04.843 21:01:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:05.101 [2024-06-09 21:01:33.221538] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.101 21:01:33 -- bdev/bdev_raid.sh@430 -- # '[' 399997bf-1392-470a-9493-b8a5a099b8b5 '!=' 399997bf-1392-470a-9493-b8a5a099b8b5 ']' 00:19:05.101 21:01:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:05.101 21:01:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:05.101 21:01:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:05.101 21:01:33 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:05.358 [2024-06-09 21:01:33.417430] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.358 21:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.616 21:01:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.616 "name": "raid_bdev1", 00:19:05.616 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:05.616 "strip_size_kb": 0, 00:19:05.616 "state": "online", 00:19:05.616 "raid_level": "raid1", 00:19:05.616 "superblock": true, 00:19:05.616 "num_base_bdevs": 4, 00:19:05.616 "num_base_bdevs_discovered": 3, 00:19:05.616 "num_base_bdevs_operational": 3, 00:19:05.616 "base_bdevs_list": [ 00:19:05.616 { 00:19:05.616 "name": null, 00:19:05.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.616 "is_configured": false, 00:19:05.616 "data_offset": 2048, 00:19:05.616 "data_size": 63488 00:19:05.616 }, 00:19:05.616 { 00:19:05.616 "name": "pt2", 00:19:05.616 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:05.616 "is_configured": true, 00:19:05.616 "data_offset": 2048, 00:19:05.616 "data_size": 63488 00:19:05.616 }, 00:19:05.616 { 00:19:05.616 "name": "pt3", 00:19:05.616 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:05.616 "is_configured": true, 00:19:05.616 "data_offset": 2048, 00:19:05.616 "data_size": 63488 00:19:05.616 }, 00:19:05.616 { 00:19:05.616 "name": "pt4", 00:19:05.616 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:05.616 "is_configured": true, 00:19:05.616 "data_offset": 2048, 00:19:05.616 "data_size": 63488 00:19:05.616 } 00:19:05.616 ] 00:19:05.616 }' 00:19:05.616 21:01:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.616 21:01:33 -- common/autotest_common.sh@10 -- # set +x 00:19:06.182 21:01:34 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:06.440 [2024-06-09 21:01:34.529641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.440 [2024-06-09 21:01:34.529879] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.440 [2024-06-09 21:01:34.530063] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.440 [2024-06-09 21:01:34.530263] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.440 [2024-06-09 21:01:34.530382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:06.440 21:01:34 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:06.440 21:01:34 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.698 21:01:34 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:06.698 21:01:34 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:06.698 21:01:34 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:06.698 21:01:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:06.698 21:01:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:06.956 21:01:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:06.956 21:01:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:06.956 21:01:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:06.957 21:01:35 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:06.957 21:01:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:06.957 21:01:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:07.215 21:01:35 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:07.215 21:01:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:07.215 21:01:35 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:07.215 21:01:35 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:07.215 21:01:35 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.474 [2024-06-09 21:01:35.525864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.474 [2024-06-09 21:01:35.526200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.474 [2024-06-09 21:01:35.526278] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:07.474 [2024-06-09 21:01:35.526422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.474 [2024-06-09 21:01:35.528944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.474 [2024-06-09 21:01:35.529156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.474 [2024-06-09 21:01:35.529402] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:07.474 [2024-06-09 21:01:35.529572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.474 pt2 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.474 21:01:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.474 21:01:35 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.733 21:01:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.733 "name": "raid_bdev1", 00:19:07.733 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:07.733 "strip_size_kb": 0, 00:19:07.733 "state": "configuring", 00:19:07.733 "raid_level": "raid1", 00:19:07.733 "superblock": true, 00:19:07.733 "num_base_bdevs": 4, 00:19:07.733 "num_base_bdevs_discovered": 1, 00:19:07.733 "num_base_bdevs_operational": 3, 00:19:07.733 "base_bdevs_list": [ 00:19:07.733 { 00:19:07.733 "name": null, 00:19:07.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.733 "is_configured": false, 00:19:07.733 "data_offset": 2048, 00:19:07.733 "data_size": 63488 00:19:07.733 }, 00:19:07.733 { 00:19:07.733 "name": "pt2", 00:19:07.733 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:07.733 "is_configured": true, 00:19:07.733 "data_offset": 2048, 00:19:07.733 "data_size": 63488 00:19:07.733 }, 00:19:07.733 { 00:19:07.733 "name": null, 00:19:07.733 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:07.733 "is_configured": false, 00:19:07.733 "data_offset": 2048, 00:19:07.733 "data_size": 63488 00:19:07.733 }, 00:19:07.733 { 00:19:07.733 "name": null, 00:19:07.733 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:07.733 "is_configured": false, 00:19:07.733 "data_offset": 2048, 00:19:07.733 "data_size": 63488 00:19:07.733 } 00:19:07.733 ] 00:19:07.733 }' 00:19:07.733 21:01:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.733 21:01:35 -- common/autotest_common.sh@10 -- # set +x 00:19:08.300 21:01:36 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:08.300 21:01:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:08.300 21:01:36 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:08.559 [2024-06-09 21:01:36.542151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:08.559 [2024-06-09 21:01:36.542435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.559 [2024-06-09 21:01:36.542525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:08.559 [2024-06-09 21:01:36.542679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.559 [2024-06-09 21:01:36.543341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.559 [2024-06-09 21:01:36.543549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:08.559 [2024-06-09 21:01:36.543777] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:08.559 [2024-06-09 21:01:36.543909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:08.559 pt3 00:19:08.559 21:01:36 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:08.559 21:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:08.559 21:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.559 21:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:08.559 21:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.560 21:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.818 21:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.818 "name": "raid_bdev1", 00:19:08.818 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:08.818 "strip_size_kb": 0, 00:19:08.818 "state": "configuring", 00:19:08.818 "raid_level": "raid1", 00:19:08.818 "superblock": true, 00:19:08.818 "num_base_bdevs": 4, 00:19:08.818 "num_base_bdevs_discovered": 2, 00:19:08.818 "num_base_bdevs_operational": 3, 00:19:08.818 "base_bdevs_list": [ 00:19:08.818 { 00:19:08.818 "name": null, 00:19:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.818 "is_configured": false, 00:19:08.818 "data_offset": 2048, 00:19:08.818 "data_size": 63488 00:19:08.818 }, 00:19:08.818 { 00:19:08.818 "name": "pt2", 00:19:08.818 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:08.818 "is_configured": true, 00:19:08.818 "data_offset": 2048, 00:19:08.818 "data_size": 63488 00:19:08.818 }, 00:19:08.818 { 00:19:08.818 "name": "pt3", 00:19:08.818 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:08.818 "is_configured": true, 00:19:08.818 "data_offset": 2048, 00:19:08.818 "data_size": 63488 00:19:08.818 }, 00:19:08.818 { 00:19:08.818 "name": null, 00:19:08.818 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:08.818 "is_configured": false, 00:19:08.818 "data_offset": 2048, 00:19:08.818 "data_size": 63488 00:19:08.818 } 00:19:08.818 ] 00:19:08.818 }' 00:19:08.818 21:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.818 21:01:36 -- common/autotest_common.sh@10 -- # set +x 00:19:09.387 21:01:37 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:09.387 21:01:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:09.387 21:01:37 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:09.387 21:01:37 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:09.646 [2024-06-09 21:01:37.610314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:09.646 [2024-06-09 21:01:37.610561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.646 [2024-06-09 21:01:37.610645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:09.646 [2024-06-09 21:01:37.610773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.646 [2024-06-09 21:01:37.611375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.646 [2024-06-09 21:01:37.611559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:09.646 [2024-06-09 21:01:37.611761] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:09.646 [2024-06-09 21:01:37.611882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:09.646 [2024-06-09 21:01:37.612067] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:19:09.646 [2024-06-09 21:01:37.612205] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 
512 00:19:09.646 [2024-06-09 21:01:37.612380] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:09.646 [2024-06-09 21:01:37.612876] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:19:09.646 [2024-06-09 21:01:37.613005] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:19:09.646 [2024-06-09 21:01:37.613226] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.646 pt4 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.646 21:01:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.905 21:01:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.905 "name": "raid_bdev1", 00:19:09.905 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:09.905 "strip_size_kb": 0, 00:19:09.905 "state": "online", 00:19:09.905 "raid_level": "raid1", 00:19:09.905 "superblock": true, 00:19:09.905 "num_base_bdevs": 4, 00:19:09.905 "num_base_bdevs_discovered": 3, 00:19:09.905 "num_base_bdevs_operational": 3, 00:19:09.905 "base_bdevs_list": [ 00:19:09.905 { 00:19:09.905 "name": null, 00:19:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.905 "is_configured": false, 00:19:09.905 "data_offset": 2048, 00:19:09.905 "data_size": 63488 00:19:09.905 }, 00:19:09.905 { 00:19:09.905 "name": "pt2", 00:19:09.905 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:09.905 "is_configured": true, 00:19:09.905 "data_offset": 2048, 00:19:09.905 "data_size": 63488 00:19:09.905 }, 00:19:09.905 { 00:19:09.905 "name": "pt3", 00:19:09.905 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:09.905 "is_configured": true, 00:19:09.905 "data_offset": 2048, 00:19:09.905 "data_size": 63488 00:19:09.905 }, 00:19:09.905 { 00:19:09.905 "name": "pt4", 00:19:09.905 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:09.905 "is_configured": true, 00:19:09.905 "data_offset": 2048, 00:19:09.905 "data_size": 63488 00:19:09.905 } 00:19:09.905 ] 00:19:09.905 }' 00:19:09.905 21:01:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.905 21:01:37 -- common/autotest_common.sh@10 -- # set +x 00:19:10.474 21:01:38 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:10.474 21:01:38 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:10.734 [2024-06-09 21:01:38.730477] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:10.734 [2024-06-09 21:01:38.730647] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:19:10.734 [2024-06-09 21:01:38.730800] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.734 [2024-06-09 21:01:38.731008] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.734 [2024-06-09 21:01:38.731127] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:19:10.734 21:01:38 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:10.734 21:01:38 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.993 21:01:38 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:10.993 21:01:38 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:10.993 21:01:38 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:11.252 [2024-06-09 21:01:39.202631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:11.252 [2024-06-09 21:01:39.202970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.252 [2024-06-09 21:01:39.203063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:11.252 [2024-06-09 21:01:39.203279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.252 [2024-06-09 21:01:39.206118] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.252 [2024-06-09 21:01:39.206346] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:11.252 [2024-06-09 21:01:39.206602] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:11.252 [2024-06-09 21:01:39.206769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:11.252 pt1 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.252 "name": "raid_bdev1", 00:19:11.252 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:11.252 "strip_size_kb": 0, 00:19:11.252 "state": "configuring", 00:19:11.252 "raid_level": "raid1", 00:19:11.252 "superblock": true, 00:19:11.252 "num_base_bdevs": 4, 00:19:11.252 "num_base_bdevs_discovered": 1, 00:19:11.252 "num_base_bdevs_operational": 4, 00:19:11.252 "base_bdevs_list": [ 00:19:11.252 { 00:19:11.252 "name": "pt1", 00:19:11.252 "uuid": 
"4a589580-9ff4-5e50-8c3c-73ed274b5b33", 00:19:11.252 "is_configured": true, 00:19:11.252 "data_offset": 2048, 00:19:11.252 "data_size": 63488 00:19:11.252 }, 00:19:11.252 { 00:19:11.252 "name": null, 00:19:11.252 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:11.252 "is_configured": false, 00:19:11.252 "data_offset": 2048, 00:19:11.252 "data_size": 63488 00:19:11.252 }, 00:19:11.252 { 00:19:11.252 "name": null, 00:19:11.252 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:11.252 "is_configured": false, 00:19:11.252 "data_offset": 2048, 00:19:11.252 "data_size": 63488 00:19:11.252 }, 00:19:11.252 { 00:19:11.252 "name": null, 00:19:11.252 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:11.252 "is_configured": false, 00:19:11.252 "data_offset": 2048, 00:19:11.252 "data_size": 63488 00:19:11.252 } 00:19:11.252 ] 00:19:11.252 }' 00:19:11.252 21:01:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.252 21:01:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:12.188 21:01:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:12.446 21:01:40 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:12.704 [2024-06-09 21:01:40.802966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:12.704 [2024-06-09 21:01:40.803298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.704 [2024-06-09 21:01:40.803457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:19:12.704 [2024-06-09 21:01:40.803625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.704 [2024-06-09 21:01:40.804216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.704 [2024-06-09 21:01:40.804379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:12.704 [2024-06-09 21:01:40.804587] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:12.704 [2024-06-09 21:01:40.804691] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:12.704 [2024-06-09 21:01:40.804785] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.704 [2024-06-09 21:01:40.804848] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:19:12.704 [2024-06-09 21:01:40.805084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:12.704 pt4 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.704 21:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.963 21:01:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.963 "name": "raid_bdev1", 00:19:12.963 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:12.963 "strip_size_kb": 0, 00:19:12.963 "state": "configuring", 00:19:12.963 "raid_level": "raid1", 00:19:12.963 "superblock": true, 00:19:12.963 "num_base_bdevs": 4, 00:19:12.963 "num_base_bdevs_discovered": 1, 00:19:12.963 "num_base_bdevs_operational": 3, 00:19:12.963 "base_bdevs_list": [ 00:19:12.963 { 00:19:12.963 "name": null, 00:19:12.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.963 "is_configured": false, 00:19:12.963 "data_offset": 2048, 00:19:12.963 "data_size": 63488 00:19:12.963 }, 00:19:12.963 { 00:19:12.963 "name": null, 00:19:12.963 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:12.963 "is_configured": false, 00:19:12.963 "data_offset": 2048, 00:19:12.963 "data_size": 63488 00:19:12.963 }, 00:19:12.963 { 00:19:12.963 "name": null, 00:19:12.963 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:12.963 "is_configured": false, 00:19:12.963 "data_offset": 2048, 00:19:12.963 "data_size": 63488 00:19:12.963 }, 00:19:12.963 { 00:19:12.963 "name": "pt4", 00:19:12.963 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:12.963 "is_configured": true, 00:19:12.963 "data_offset": 2048, 00:19:12.963 "data_size": 63488 00:19:12.963 } 00:19:12.963 ] 00:19:12.963 }' 00:19:12.963 21:01:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.963 21:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:13.530 21:01:41 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:13.530 21:01:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:13.530 21:01:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:13.788 [2024-06-09 21:01:41.843171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:13.788 [2024-06-09 21:01:41.843459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.788 [2024-06-09 21:01:41.843535] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:19:13.788 [2024-06-09 21:01:41.843711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.788 [2024-06-09 
21:01:41.844319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.788 [2024-06-09 21:01:41.844510] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:13.788 [2024-06-09 21:01:41.844738] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:13.788 [2024-06-09 21:01:41.844869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:13.788 pt2 00:19:13.788 21:01:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:13.788 21:01:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:13.788 21:01:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:14.046 [2024-06-09 21:01:42.111221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:14.046 [2024-06-09 21:01:42.111471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.046 [2024-06-09 21:01:42.111540] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:19:14.046 [2024-06-09 21:01:42.111655] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.046 [2024-06-09 21:01:42.112099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.046 [2024-06-09 21:01:42.112289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:14.046 [2024-06-09 21:01:42.112504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:14.046 [2024-06-09 21:01:42.112625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:14.046 [2024-06-09 21:01:42.112797] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:19:14.046 [2024-06-09 21:01:42.112934] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:14.046 [2024-06-09 21:01:42.113074] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:19:14.046 [2024-06-09 21:01:42.113648] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:19:14.046 [2024-06-09 21:01:42.113783] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:19:14.046 [2024-06-09 21:01:42.114028] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.046 pt3 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.046 21:01:42 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.046 21:01:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.304 21:01:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.304 "name": "raid_bdev1", 00:19:14.304 "uuid": "399997bf-1392-470a-9493-b8a5a099b8b5", 00:19:14.304 "strip_size_kb": 0, 00:19:14.304 "state": "online", 00:19:14.304 "raid_level": "raid1", 00:19:14.304 "superblock": true, 00:19:14.304 "num_base_bdevs": 4, 00:19:14.304 "num_base_bdevs_discovered": 3, 00:19:14.304 "num_base_bdevs_operational": 3, 00:19:14.304 "base_bdevs_list": [ 00:19:14.304 { 00:19:14.304 "name": null, 00:19:14.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.304 "is_configured": false, 00:19:14.304 "data_offset": 2048, 00:19:14.304 "data_size": 63488 00:19:14.304 }, 00:19:14.304 { 00:19:14.304 "name": "pt2", 00:19:14.304 "uuid": "287101ea-5243-5f2d-b9aa-8341463254b7", 00:19:14.304 "is_configured": true, 00:19:14.304 "data_offset": 2048, 00:19:14.304 "data_size": 63488 00:19:14.304 }, 00:19:14.304 { 00:19:14.304 "name": "pt3", 00:19:14.304 "uuid": "76c85311-540a-5e86-b535-e77e06695e83", 00:19:14.304 "is_configured": true, 00:19:14.304 "data_offset": 2048, 00:19:14.304 "data_size": 63488 00:19:14.304 }, 00:19:14.304 { 00:19:14.304 "name": "pt4", 00:19:14.304 "uuid": "4c0f5d2c-3218-5e05-bf9a-ec8daacb68f4", 00:19:14.304 "is_configured": true, 00:19:14.304 "data_offset": 2048, 00:19:14.304 "data_size": 63488 00:19:14.304 } 00:19:14.304 ] 00:19:14.304 }' 00:19:14.304 21:01:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.304 21:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:14.869 21:01:42 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:14.869 21:01:42 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:15.127 [2024-06-09 21:01:43.175856] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.128 21:01:43 -- bdev/bdev_raid.sh@506 -- # '[' 399997bf-1392-470a-9493-b8a5a099b8b5 '!=' 399997bf-1392-470a-9493-b8a5a099b8b5 ']' 00:19:15.128 21:01:43 -- bdev/bdev_raid.sh@511 -- # killprocess 121122 00:19:15.128 21:01:43 -- common/autotest_common.sh@926 -- # '[' -z 121122 ']' 00:19:15.128 21:01:43 -- common/autotest_common.sh@930 -- # kill -0 121122 00:19:15.128 21:01:43 -- common/autotest_common.sh@931 -- # uname 00:19:15.128 21:01:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.128 21:01:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121122 00:19:15.128 killing process with pid 121122 00:19:15.128 21:01:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:15.128 21:01:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:15.128 21:01:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121122' 00:19:15.128 21:01:43 -- common/autotest_common.sh@945 -- # kill 121122 00:19:15.128 [2024-06-09 21:01:43.225285] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.128 21:01:43 -- common/autotest_common.sh@950 -- # wait 121122 00:19:15.128 [2024-06-09 21:01:43.225374] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.128 [2024-06-09 21:01:43.225474] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.128 [2024-06-09 
21:01:43.225489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:19:15.387 [2024-06-09 21:01:43.511084] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.322 ************************************ 00:19:16.322 END TEST raid_superblock_test 00:19:16.322 ************************************ 00:19:16.322 21:01:44 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:16.322 00:19:16.322 real 0m21.415s 00:19:16.322 user 0m39.335s 00:19:16.322 sys 0m2.554s 00:19:16.322 21:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.322 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:19:16.322 21:01:44 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:16.322 21:01:44 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:16.322 21:01:44 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:16.322 21:01:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:16.322 21:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.322 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:19:16.582 ************************************ 00:19:16.582 START TEST raid_rebuild_test 00:19:16.582 ************************************ 00:19:16.582 21:01:44 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@544 -- # raid_pid=121795 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@545 -- # waitforlisten 121795 /var/tmp/spdk-raid.sock 00:19:16.582 21:01:44 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:16.582 21:01:44 -- common/autotest_common.sh@819 -- # '[' -z 121795 ']' 00:19:16.582 21:01:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:16.582 21:01:44 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.582 21:01:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:16.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:16.582 21:01:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.582 21:01:44 -- common/autotest_common.sh@10 -- # set +x 00:19:16.582 [2024-06-09 21:01:44.566593] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:16.582 [2024-06-09 21:01:44.566964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121795 ] 00:19:16.582 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:16.582 Zero copy mechanism will not be used. 00:19:16.582 [2024-06-09 21:01:44.718825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.841 [2024-06-09 21:01:44.886479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.099 [2024-06-09 21:01:45.056902] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.358 21:01:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.358 21:01:45 -- common/autotest_common.sh@852 -- # return 0 00:19:17.358 21:01:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:17.358 21:01:45 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:17.358 21:01:45 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.617 BaseBdev1 00:19:17.617 21:01:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:17.617 21:01:45 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:17.617 21:01:45 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:17.875 BaseBdev2 00:19:17.875 21:01:45 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:18.134 spare_malloc 00:19:18.134 21:01:46 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:18.393 spare_delay 00:19:18.393 21:01:46 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:18.651 [2024-06-09 21:01:46.639454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.651 [2024-06-09 21:01:46.639718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.651 [2024-06-09 21:01:46.639793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:18.651 [2024-06-09 21:01:46.640056] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.651 [2024-06-09 21:01:46.642584] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.651 [2024-06-09 21:01:46.642772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.651 spare 00:19:18.651 21:01:46 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:18.910 [2024-06-09 21:01:46.839664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.910 [2024-06-09 21:01:46.841545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.910 [2024-06-09 21:01:46.841781] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:18.910 [2024-06-09 21:01:46.841827] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:18.910 [2024-06-09 21:01:46.842053] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:18.910 [2024-06-09 21:01:46.842511] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:18.910 [2024-06-09 21:01:46.842637] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:18.910 [2024-06-09 21:01:46.842921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.910 21:01:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.910 21:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.910 "name": "raid_bdev1", 00:19:18.910 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:18.910 "strip_size_kb": 0, 00:19:18.910 "state": "online", 00:19:18.910 "raid_level": "raid1", 00:19:18.910 "superblock": false, 00:19:18.910 "num_base_bdevs": 2, 00:19:18.910 "num_base_bdevs_discovered": 2, 00:19:18.910 "num_base_bdevs_operational": 2, 00:19:18.910 "base_bdevs_list": [ 00:19:18.910 { 00:19:18.910 "name": "BaseBdev1", 00:19:18.910 "uuid": "8d053af8-23c0-4582-b6d8-d788994392af", 00:19:18.910 "is_configured": true, 00:19:18.910 "data_offset": 0, 00:19:18.911 "data_size": 65536 00:19:18.911 }, 00:19:18.911 { 00:19:18.911 "name": "BaseBdev2", 00:19:18.911 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:18.911 "is_configured": true, 00:19:18.911 "data_offset": 0, 00:19:18.911 "data_size": 65536 00:19:18.911 } 00:19:18.911 ] 00:19:18.911 }' 00:19:18.911 21:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.911 21:01:47 -- common/autotest_common.sh@10 -- # set +x 00:19:19.847 21:01:47 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:19.847 21:01:47 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:19.847 [2024-06-09 21:01:47.912021] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.847 21:01:47 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:19.847 21:01:47 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.847 21:01:47 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:20.105 21:01:48 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:20.105 21:01:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:20.106 21:01:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:20.106 21:01:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@12 -- # local i 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:20.106 21:01:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:20.367 [2024-06-09 21:01:48.415964] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:20.367 /dev/nbd0 00:19:20.367 21:01:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:20.367 21:01:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:20.367 21:01:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:20.367 21:01:48 -- common/autotest_common.sh@857 -- # local i 00:19:20.367 21:01:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:20.367 21:01:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:20.367 21:01:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:20.367 21:01:48 -- common/autotest_common.sh@861 -- # break 00:19:20.367 21:01:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:20.367 21:01:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:20.367 21:01:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:20.367 1+0 records in 00:19:20.367 1+0 records out 00:19:20.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552314 s, 7.4 MB/s 00:19:20.367 21:01:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.367 21:01:48 -- common/autotest_common.sh@874 -- # size=4096 00:19:20.367 21:01:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:20.367 21:01:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:20.367 21:01:48 -- common/autotest_common.sh@877 -- # return 0 00:19:20.367 21:01:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:20.367 21:01:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:20.367 21:01:48 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:20.367 21:01:48 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:20.367 21:01:48 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:25.631 65536+0 records in 00:19:25.631 65536+0 records out 00:19:25.631 33554432 bytes (34 MB, 32 MiB) 
copied, 4.70275 s, 7.1 MB/s 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@51 -- # local i 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@41 -- # break 00:19:25.631 21:01:53 -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:25.631 [2024-06-09 21:01:53.448781] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.631 [2024-06-09 21:01:53.632427] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.631 21:01:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.890 21:01:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.890 "name": "raid_bdev1", 00:19:25.890 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:25.890 "strip_size_kb": 0, 00:19:25.890 "state": "online", 00:19:25.890 "raid_level": "raid1", 00:19:25.890 "superblock": false, 00:19:25.890 "num_base_bdevs": 2, 00:19:25.890 "num_base_bdevs_discovered": 1, 00:19:25.890 "num_base_bdevs_operational": 1, 00:19:25.890 "base_bdevs_list": [ 00:19:25.890 { 00:19:25.890 "name": null, 00:19:25.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.890 "is_configured": false, 00:19:25.890 "data_offset": 0, 00:19:25.890 "data_size": 65536 00:19:25.890 }, 00:19:25.890 { 00:19:25.890 "name": "BaseBdev2", 00:19:25.890 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:25.890 "is_configured": true, 00:19:25.890 "data_offset": 0, 00:19:25.890 "data_size": 65536 00:19:25.890 } 00:19:25.890 ] 00:19:25.890 }' 
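The rebuild test above exercises the raid1 data path from userspace: the freshly created raid_bdev1 is exported as a Linux block device over NBD, filled with random data, detached, and then degraded by removing one mirror leg. The same sequence in isolation, under the same target and $rpc/$sock assumptions as the note above:

  # export the raid bdev as /dev/nbd0, fill all 65536 blocks, detach again
  $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  # degrade the array by dropping one of the two base bdevs
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1

Writing through /dev/nbd0 with oflag=direct is what makes the 33554432-byte fill above land on both mirror legs, so the surviving leg still holds a full copy after BaseBdev1 is removed.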
00:19:25.890 21:01:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.890 21:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:26.456 21:01:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.714 [2024-06-09 21:01:54.704677] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:26.714 [2024-06-09 21:01:54.704878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.714 [2024-06-09 21:01:54.717674] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:19:26.714 [2024-06-09 21:01:54.719770] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.714 21:01:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.649 21:01:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.906 21:01:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:27.906 "name": "raid_bdev1", 00:19:27.907 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:27.907 "strip_size_kb": 0, 00:19:27.907 "state": "online", 00:19:27.907 "raid_level": "raid1", 00:19:27.907 "superblock": false, 00:19:27.907 "num_base_bdevs": 2, 00:19:27.907 "num_base_bdevs_discovered": 2, 00:19:27.907 "num_base_bdevs_operational": 2, 00:19:27.907 "process": { 00:19:27.907 "type": "rebuild", 00:19:27.907 "target": "spare", 00:19:27.907 "progress": { 00:19:27.907 "blocks": 24576, 00:19:27.907 "percent": 37 00:19:27.907 } 00:19:27.907 }, 00:19:27.907 "base_bdevs_list": [ 00:19:27.907 { 00:19:27.907 "name": "spare", 00:19:27.907 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:27.907 "is_configured": true, 00:19:27.907 "data_offset": 0, 00:19:27.907 "data_size": 65536 00:19:27.907 }, 00:19:27.907 { 00:19:27.907 "name": "BaseBdev2", 00:19:27.907 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:27.907 "is_configured": true, 00:19:27.907 "data_offset": 0, 00:19:27.907 "data_size": 65536 00:19:27.907 } 00:19:27.907 ] 00:19:27.907 }' 00:19:27.907 21:01:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:27.907 21:01:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.907 21:01:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:27.907 21:01:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.907 21:01:56 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:28.165 [2024-06-09 21:01:56.298196] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.165 [2024-06-09 21:01:56.328625] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:28.165 [2024-06-09 21:01:56.328886] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.424 21:01:56 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.424 21:01:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.682 21:01:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.682 "name": "raid_bdev1", 00:19:28.682 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:28.682 "strip_size_kb": 0, 00:19:28.682 "state": "online", 00:19:28.682 "raid_level": "raid1", 00:19:28.682 "superblock": false, 00:19:28.682 "num_base_bdevs": 2, 00:19:28.682 "num_base_bdevs_discovered": 1, 00:19:28.682 "num_base_bdevs_operational": 1, 00:19:28.682 "base_bdevs_list": [ 00:19:28.682 { 00:19:28.682 "name": null, 00:19:28.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.682 "is_configured": false, 00:19:28.682 "data_offset": 0, 00:19:28.682 "data_size": 65536 00:19:28.682 }, 00:19:28.682 { 00:19:28.682 "name": "BaseBdev2", 00:19:28.682 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:28.682 "is_configured": true, 00:19:28.682 "data_offset": 0, 00:19:28.682 "data_size": 65536 00:19:28.682 } 00:19:28.682 ] 00:19:28.682 }' 00:19:28.682 21:01:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.682 21:01:56 -- common/autotest_common.sh@10 -- # set +x 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.250 21:01:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:29.508 "name": "raid_bdev1", 00:19:29.508 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:29.508 "strip_size_kb": 0, 00:19:29.508 "state": "online", 00:19:29.508 "raid_level": "raid1", 00:19:29.508 "superblock": false, 00:19:29.508 "num_base_bdevs": 2, 00:19:29.508 "num_base_bdevs_discovered": 1, 00:19:29.508 "num_base_bdevs_operational": 1, 00:19:29.508 "base_bdevs_list": [ 00:19:29.508 { 00:19:29.508 "name": null, 00:19:29.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.508 "is_configured": false, 00:19:29.508 "data_offset": 0, 00:19:29.508 "data_size": 65536 00:19:29.508 }, 00:19:29.508 { 00:19:29.508 "name": "BaseBdev2", 00:19:29.508 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:29.508 "is_configured": true, 
00:19:29.508 "data_offset": 0, 00:19:29.508 "data_size": 65536 00:19:29.508 } 00:19:29.508 ] 00:19:29.508 }' 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:29.508 21:01:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:29.766 [2024-06-09 21:01:57.758467] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:29.766 [2024-06-09 21:01:57.758886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.766 [2024-06-09 21:01:57.772211] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:19:29.766 [2024-06-09 21:01:57.774479] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.766 21:01:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:30.698 21:01:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.698 21:01:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:30.698 21:01:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:30.698 21:01:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:30.698 21:01:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:30.699 21:01:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.699 21:01:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.957 21:01:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:30.957 "name": "raid_bdev1", 00:19:30.957 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:30.957 "strip_size_kb": 0, 00:19:30.957 "state": "online", 00:19:30.957 "raid_level": "raid1", 00:19:30.957 "superblock": false, 00:19:30.957 "num_base_bdevs": 2, 00:19:30.957 "num_base_bdevs_discovered": 2, 00:19:30.957 "num_base_bdevs_operational": 2, 00:19:30.957 "process": { 00:19:30.957 "type": "rebuild", 00:19:30.957 "target": "spare", 00:19:30.957 "progress": { 00:19:30.957 "blocks": 22528, 00:19:30.957 "percent": 34 00:19:30.957 } 00:19:30.957 }, 00:19:30.957 "base_bdevs_list": [ 00:19:30.957 { 00:19:30.957 "name": "spare", 00:19:30.957 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:30.957 "is_configured": true, 00:19:30.957 "data_offset": 0, 00:19:30.957 "data_size": 65536 00:19:30.957 }, 00:19:30.957 { 00:19:30.957 "name": "BaseBdev2", 00:19:30.957 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:30.957 "is_configured": true, 00:19:30.957 "data_offset": 0, 00:19:30.957 "data_size": 65536 00:19:30.957 } 00:19:30.957 ] 00:19:30.957 }' 00:19:30.957 21:01:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:30.957 21:01:59 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@657 -- # local timeout=391 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.957 21:01:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.215 21:01:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:31.215 "name": "raid_bdev1", 00:19:31.216 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:31.216 "strip_size_kb": 0, 00:19:31.216 "state": "online", 00:19:31.216 "raid_level": "raid1", 00:19:31.216 "superblock": false, 00:19:31.216 "num_base_bdevs": 2, 00:19:31.216 "num_base_bdevs_discovered": 2, 00:19:31.216 "num_base_bdevs_operational": 2, 00:19:31.216 "process": { 00:19:31.216 "type": "rebuild", 00:19:31.216 "target": "spare", 00:19:31.216 "progress": { 00:19:31.216 "blocks": 30720, 00:19:31.216 "percent": 46 00:19:31.216 } 00:19:31.216 }, 00:19:31.216 "base_bdevs_list": [ 00:19:31.216 { 00:19:31.216 "name": "spare", 00:19:31.216 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:31.216 "is_configured": true, 00:19:31.216 "data_offset": 0, 00:19:31.216 "data_size": 65536 00:19:31.216 }, 00:19:31.216 { 00:19:31.216 "name": "BaseBdev2", 00:19:31.216 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:31.216 "is_configured": true, 00:19:31.216 "data_offset": 0, 00:19:31.216 "data_size": 65536 00:19:31.216 } 00:19:31.216 ] 00:19:31.216 }' 00:19:31.216 21:01:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:31.216 21:01:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.216 21:01:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:31.474 21:01:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.474 21:01:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.408 21:02:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:32.667 "name": "raid_bdev1", 00:19:32.667 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:32.667 "strip_size_kb": 0, 00:19:32.667 "state": "online", 00:19:32.667 "raid_level": "raid1", 00:19:32.667 "superblock": false, 00:19:32.667 "num_base_bdevs": 2, 00:19:32.667 "num_base_bdevs_discovered": 2, 00:19:32.667 "num_base_bdevs_operational": 2, 00:19:32.667 "process": { 
00:19:32.667 "type": "rebuild", 00:19:32.667 "target": "spare", 00:19:32.667 "progress": { 00:19:32.667 "blocks": 57344, 00:19:32.667 "percent": 87 00:19:32.667 } 00:19:32.667 }, 00:19:32.667 "base_bdevs_list": [ 00:19:32.667 { 00:19:32.667 "name": "spare", 00:19:32.667 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:32.667 "is_configured": true, 00:19:32.667 "data_offset": 0, 00:19:32.667 "data_size": 65536 00:19:32.667 }, 00:19:32.667 { 00:19:32.667 "name": "BaseBdev2", 00:19:32.667 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:32.667 "is_configured": true, 00:19:32.667 "data_offset": 0, 00:19:32.667 "data_size": 65536 00:19:32.667 } 00:19:32.667 ] 00:19:32.667 }' 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.667 21:02:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:32.926 [2024-06-09 21:02:00.995695] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:32.926 [2024-06-09 21:02:00.996078] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:32.926 [2024-06-09 21:02:00.996291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:33.859 "name": "raid_bdev1", 00:19:33.859 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:33.859 "strip_size_kb": 0, 00:19:33.859 "state": "online", 00:19:33.859 "raid_level": "raid1", 00:19:33.859 "superblock": false, 00:19:33.859 "num_base_bdevs": 2, 00:19:33.859 "num_base_bdevs_discovered": 2, 00:19:33.859 "num_base_bdevs_operational": 2, 00:19:33.859 "base_bdevs_list": [ 00:19:33.859 { 00:19:33.859 "name": "spare", 00:19:33.859 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:33.859 "is_configured": true, 00:19:33.859 "data_offset": 0, 00:19:33.859 "data_size": 65536 00:19:33.859 }, 00:19:33.859 { 00:19:33.859 "name": "BaseBdev2", 00:19:33.859 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:33.859 "is_configured": true, 00:19:33.859 "data_offset": 0, 00:19:33.859 "data_size": 65536 00:19:33.859 } 00:19:33.859 ] 00:19:33.859 }' 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:33.859 21:02:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@660 -- # break 00:19:33.859 21:02:02 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:33.859 21:02:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:34.117 21:02:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.117 21:02:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.117 21:02:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:34.117 "name": "raid_bdev1", 00:19:34.117 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:34.117 "strip_size_kb": 0, 00:19:34.117 "state": "online", 00:19:34.117 "raid_level": "raid1", 00:19:34.117 "superblock": false, 00:19:34.117 "num_base_bdevs": 2, 00:19:34.117 "num_base_bdevs_discovered": 2, 00:19:34.117 "num_base_bdevs_operational": 2, 00:19:34.117 "base_bdevs_list": [ 00:19:34.117 { 00:19:34.117 "name": "spare", 00:19:34.117 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:34.117 "is_configured": true, 00:19:34.117 "data_offset": 0, 00:19:34.117 "data_size": 65536 00:19:34.117 }, 00:19:34.117 { 00:19:34.117 "name": "BaseBdev2", 00:19:34.117 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:34.117 "is_configured": true, 00:19:34.117 "data_offset": 0, 00:19:34.117 "data_size": 65536 00:19:34.117 } 00:19:34.117 ] 00:19:34.117 }' 00:19:34.117 21:02:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.375 21:02:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.634 21:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.634 "name": "raid_bdev1", 00:19:34.634 "uuid": "af1b6b5d-03ae-4fa5-afbb-c471d148a9da", 00:19:34.634 "strip_size_kb": 0, 00:19:34.634 "state": "online", 00:19:34.634 "raid_level": "raid1", 00:19:34.634 "superblock": false, 00:19:34.634 "num_base_bdevs": 2, 00:19:34.634 "num_base_bdevs_discovered": 2, 00:19:34.634 "num_base_bdevs_operational": 2, 00:19:34.634 "base_bdevs_list": [ 00:19:34.634 { 00:19:34.634 "name": "spare", 00:19:34.634 "uuid": "f1d60763-c961-5323-85f6-dcc691b119cd", 00:19:34.634 "is_configured": true, 00:19:34.634 "data_offset": 0, 
00:19:34.634 "data_size": 65536 00:19:34.634 }, 00:19:34.634 { 00:19:34.634 "name": "BaseBdev2", 00:19:34.634 "uuid": "6b23558b-3e8c-4101-898a-3c64646bab35", 00:19:34.634 "is_configured": true, 00:19:34.634 "data_offset": 0, 00:19:34.634 "data_size": 65536 00:19:34.634 } 00:19:34.634 ] 00:19:34.634 }' 00:19:34.634 21:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.634 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:19:35.200 21:02:03 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:35.458 [2024-06-09 21:02:03.422931] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.458 [2024-06-09 21:02:03.424465] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.458 [2024-06-09 21:02:03.424983] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.458 [2024-06-09 21:02:03.425408] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.458 [2024-06-09 21:02:03.425766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:19:35.458 21:02:03 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.458 21:02:03 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:35.716 21:02:03 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:35.716 21:02:03 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:35.716 21:02:03 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@12 -- # local i 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:35.716 21:02:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:35.973 /dev/nbd0 00:19:35.973 21:02:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:35.973 21:02:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:35.973 21:02:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:35.973 21:02:03 -- common/autotest_common.sh@857 -- # local i 00:19:35.973 21:02:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:35.973 21:02:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:35.973 21:02:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:35.973 21:02:03 -- common/autotest_common.sh@861 -- # break 00:19:35.973 21:02:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:35.973 21:02:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:35.973 21:02:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:35.973 1+0 records in 00:19:35.973 1+0 records out 00:19:35.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606289 s, 6.8 MB/s 00:19:35.974 21:02:04 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.974 21:02:04 -- common/autotest_common.sh@874 -- # size=4096 00:19:35.974 21:02:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:35.974 21:02:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:35.974 21:02:04 -- common/autotest_common.sh@877 -- # return 0 00:19:35.974 21:02:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:35.974 21:02:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:35.974 21:02:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:36.231 /dev/nbd1 00:19:36.231 21:02:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:36.231 21:02:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:36.231 21:02:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:36.231 21:02:04 -- common/autotest_common.sh@857 -- # local i 00:19:36.231 21:02:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:36.231 21:02:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:36.231 21:02:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:36.231 21:02:04 -- common/autotest_common.sh@861 -- # break 00:19:36.231 21:02:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:36.231 21:02:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:36.231 21:02:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.231 1+0 records in 00:19:36.231 1+0 records out 00:19:36.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849101 s, 4.8 MB/s 00:19:36.231 21:02:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.231 21:02:04 -- common/autotest_common.sh@874 -- # size=4096 00:19:36.231 21:02:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.231 21:02:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:36.231 21:02:04 -- common/autotest_common.sh@877 -- # return 0 00:19:36.231 21:02:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:36.231 21:02:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:36.231 21:02:04 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:36.489 21:02:04 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@51 -- # local i 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.489 21:02:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@41 -- # break 
00:19:36.747 21:02:04 -- bdev/nbd_common.sh@45 -- # return 0 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:36.747 21:02:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@41 -- # break 00:19:37.005 21:02:05 -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.005 21:02:05 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:37.005 21:02:05 -- bdev/bdev_raid.sh@709 -- # killprocess 121795 00:19:37.005 21:02:05 -- common/autotest_common.sh@926 -- # '[' -z 121795 ']' 00:19:37.005 21:02:05 -- common/autotest_common.sh@930 -- # kill -0 121795 00:19:37.005 21:02:05 -- common/autotest_common.sh@931 -- # uname 00:19:37.005 21:02:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.005 21:02:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121795 00:19:37.005 21:02:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.005 21:02:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.005 21:02:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121795' 00:19:37.005 killing process with pid 121795 00:19:37.005 21:02:05 -- common/autotest_common.sh@945 -- # kill 121795 00:19:37.005 Received shutdown signal, test time was about 60.000000 seconds 00:19:37.005 00:19:37.005 Latency(us) 00:19:37.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.005 =================================================================================================================== 00:19:37.005 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.005 [2024-06-09 21:02:05.075570] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.005 21:02:05 -- common/autotest_common.sh@950 -- # wait 121795 00:19:37.262 [2024-06-09 21:02:05.269854] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:38.196 00:19:38.196 real 0m21.722s 00:19:38.196 user 0m30.025s 00:19:38.196 sys 0m3.703s 00:19:38.196 21:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.196 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:38.196 ************************************ 00:19:38.196 END TEST raid_rebuild_test 00:19:38.196 ************************************ 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:19:38.196 21:02:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:38.196 21:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:38.196 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:38.196 ************************************ 00:19:38.196 START TEST raid_rebuild_test_sb 00:19:38.196 ************************************ 00:19:38.196 21:02:06 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:38.196 
21:02:06 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:38.196 21:02:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=122331 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:38.197 21:02:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 122331 /var/tmp/spdk-raid.sock 00:19:38.197 21:02:06 -- common/autotest_common.sh@819 -- # '[' -z 122331 ']' 00:19:38.197 21:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:38.197 21:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:38.197 21:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:38.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:38.197 21:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:38.197 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:38.197 [2024-06-09 21:02:06.348564] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:38.197 [2024-06-09 21:02:06.348739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122331 ] 00:19:38.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:38.197 Zero copy mechanism will not be used. 
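The trace above shows raid_rebuild_test_sb starting a dedicated bdevperf instance on a private RPC socket (/var/tmp/spdk-raid.sock) and blocking in waitforlisten until the application answers RPCs. A minimal bash sketch of that launch-and-wait pattern, using only the paths and flags visible in the trace; the polling loop and the rpc_get_methods probe are assumptions standing in for waitforlisten's internal check, not taken from this log:

    #!/usr/bin/env bash
    # Start bdevperf on its own RPC socket, as in the xtrace above.
    spdk=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    "$spdk/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Assumed stand-in for waitforlisten: retry a harmless RPC until the socket is up.
    until "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "bdevperf (pid $raid_pid) ready on $rpc_sock"

Once the socket answers, the test issues all further configuration (bdev_malloc_create, bdev_passthru_create, bdev_raid_create) through that same channel, which is why every rpc.py line in the trace carries the -s /var/tmp/spdk-raid.sock argument.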
00:19:38.455 [2024-06-09 21:02:06.498187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.713 [2024-06-09 21:02:06.688960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.713 [2024-06-09 21:02:06.857878] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.278 21:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:39.278 21:02:07 -- common/autotest_common.sh@852 -- # return 0 00:19:39.278 21:02:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:39.278 21:02:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:39.278 21:02:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:39.536 BaseBdev1_malloc 00:19:39.536 21:02:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.795 [2024-06-09 21:02:07.803883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.796 [2024-06-09 21:02:07.803991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.796 [2024-06-09 21:02:07.804029] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:39.796 [2024-06-09 21:02:07.804072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.796 [2024-06-09 21:02:07.806325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.796 [2024-06-09 21:02:07.806391] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.796 BaseBdev1 00:19:39.796 21:02:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:39.796 21:02:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:39.796 21:02:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:40.054 BaseBdev2_malloc 00:19:40.054 21:02:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:40.313 [2024-06-09 21:02:08.292718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:40.313 [2024-06-09 21:02:08.292825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.313 [2024-06-09 21:02:08.292868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:40.313 [2024-06-09 21:02:08.292925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.313 [2024-06-09 21:02:08.295272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.313 [2024-06-09 21:02:08.295340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.313 BaseBdev2 00:19:40.313 21:02:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:40.573 spare_malloc 00:19:40.573 21:02:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:40.573 spare_delay 00:19:40.573 21:02:08 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:40.831 [2024-06-09 21:02:08.926360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.832 [2024-06-09 21:02:08.926460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.832 [2024-06-09 21:02:08.926507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:19:40.832 [2024-06-09 21:02:08.926550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.832 [2024-06-09 21:02:08.928893] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.832 [2024-06-09 21:02:08.928968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.832 spare 00:19:40.832 21:02:08 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:41.090 [2024-06-09 21:02:09.174447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:41.090 [2024-06-09 21:02:09.176399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:41.090 [2024-06-09 21:02:09.176651] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:41.090 [2024-06-09 21:02:09.176668] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:41.090 [2024-06-09 21:02:09.176839] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:41.090 [2024-06-09 21:02:09.177231] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:41.090 [2024-06-09 21:02:09.177258] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:19:41.090 [2024-06-09 21:02:09.177468] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.090 21:02:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.349 21:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.349 "name": "raid_bdev1", 00:19:41.349 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:41.349 "strip_size_kb": 0, 00:19:41.349 "state": "online", 00:19:41.349 "raid_level": "raid1", 00:19:41.349 "superblock": true, 00:19:41.349 "num_base_bdevs": 2, 00:19:41.349 "num_base_bdevs_discovered": 2, 00:19:41.349 "num_base_bdevs_operational": 2, 00:19:41.349 
"base_bdevs_list": [ 00:19:41.349 { 00:19:41.349 "name": "BaseBdev1", 00:19:41.349 "uuid": "ab159340-a6d4-5beb-94f5-9d121bce5944", 00:19:41.349 "is_configured": true, 00:19:41.349 "data_offset": 2048, 00:19:41.349 "data_size": 63488 00:19:41.349 }, 00:19:41.349 { 00:19:41.349 "name": "BaseBdev2", 00:19:41.349 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:41.349 "is_configured": true, 00:19:41.349 "data_offset": 2048, 00:19:41.349 "data_size": 63488 00:19:41.349 } 00:19:41.349 ] 00:19:41.349 }' 00:19:41.349 21:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.349 21:02:09 -- common/autotest_common.sh@10 -- # set +x 00:19:41.916 21:02:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:41.916 21:02:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:42.175 [2024-06-09 21:02:10.178826] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.175 21:02:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:42.175 21:02:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.175 21:02:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:42.434 21:02:10 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:42.434 21:02:10 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:42.434 21:02:10 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:42.434 21:02:10 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@12 -- # local i 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.434 21:02:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:42.693 [2024-06-09 21:02:10.670747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:42.693 /dev/nbd0 00:19:42.693 21:02:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:42.693 21:02:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:42.693 21:02:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:42.693 21:02:10 -- common/autotest_common.sh@857 -- # local i 00:19:42.693 21:02:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:42.693 21:02:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:42.693 21:02:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:42.693 21:02:10 -- common/autotest_common.sh@861 -- # break 00:19:42.693 21:02:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:42.693 21:02:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:42.693 21:02:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:42.693 1+0 records in 00:19:42.693 1+0 records out 00:19:42.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246177 s, 16.6 MB/s 00:19:42.693 21:02:10 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.693 21:02:10 -- common/autotest_common.sh@874 -- # size=4096 00:19:42.693 21:02:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:42.693 21:02:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:42.693 21:02:10 -- common/autotest_common.sh@877 -- # return 0 00:19:42.693 21:02:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:42.693 21:02:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:42.693 21:02:10 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:42.693 21:02:10 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:42.693 21:02:10 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:46.907 63488+0 records in 00:19:46.907 63488+0 records out 00:19:46.907 32505856 bytes (33 MB, 31 MiB) copied, 4.03479 s, 8.1 MB/s 00:19:46.907 21:02:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@51 -- # local i 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.907 21:02:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.907 [2024-06-09 21:02:15.043698] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@41 -- # break 00:19:46.907 21:02:15 -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.907 21:02:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:47.165 [2024-06-09 21:02:15.271417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.165 21:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.423 
21:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.423 "name": "raid_bdev1", 00:19:47.423 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:47.423 "strip_size_kb": 0, 00:19:47.423 "state": "online", 00:19:47.423 "raid_level": "raid1", 00:19:47.423 "superblock": true, 00:19:47.423 "num_base_bdevs": 2, 00:19:47.423 "num_base_bdevs_discovered": 1, 00:19:47.423 "num_base_bdevs_operational": 1, 00:19:47.423 "base_bdevs_list": [ 00:19:47.423 { 00:19:47.423 "name": null, 00:19:47.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.423 "is_configured": false, 00:19:47.423 "data_offset": 2048, 00:19:47.423 "data_size": 63488 00:19:47.423 }, 00:19:47.423 { 00:19:47.423 "name": "BaseBdev2", 00:19:47.423 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:47.423 "is_configured": true, 00:19:47.423 "data_offset": 2048, 00:19:47.423 "data_size": 63488 00:19:47.423 } 00:19:47.423 ] 00:19:47.423 }' 00:19:47.423 21:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.423 21:02:15 -- common/autotest_common.sh@10 -- # set +x 00:19:47.989 21:02:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.247 [2024-06-09 21:02:16.307598] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:48.247 [2024-06-09 21:02:16.307643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.247 [2024-06-09 21:02:16.320249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:19:48.247 [2024-06-09 21:02:16.322340] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.247 21:02:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.183 21:02:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.442 21:02:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:49.442 "name": "raid_bdev1", 00:19:49.442 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:49.442 "strip_size_kb": 0, 00:19:49.442 "state": "online", 00:19:49.442 "raid_level": "raid1", 00:19:49.442 "superblock": true, 00:19:49.442 "num_base_bdevs": 2, 00:19:49.442 "num_base_bdevs_discovered": 2, 00:19:49.442 "num_base_bdevs_operational": 2, 00:19:49.442 "process": { 00:19:49.442 "type": "rebuild", 00:19:49.442 "target": "spare", 00:19:49.442 "progress": { 00:19:49.442 "blocks": 24576, 00:19:49.442 "percent": 38 00:19:49.442 } 00:19:49.442 }, 00:19:49.442 "base_bdevs_list": [ 00:19:49.442 { 00:19:49.442 "name": "spare", 00:19:49.442 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:49.442 "is_configured": true, 00:19:49.442 "data_offset": 2048, 00:19:49.442 "data_size": 63488 00:19:49.442 }, 00:19:49.442 { 00:19:49.442 "name": "BaseBdev2", 00:19:49.442 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:49.442 "is_configured": true, 00:19:49.442 "data_offset": 2048, 00:19:49.442 "data_size": 63488 
00:19:49.442 } 00:19:49.442 ] 00:19:49.442 }' 00:19:49.442 21:02:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:49.701 21:02:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.701 21:02:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:49.701 21:02:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.701 21:02:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:49.960 [2024-06-09 21:02:17.904055] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:49.960 [2024-06-09 21:02:17.931155] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:49.960 [2024-06-09 21:02:17.931306] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.960 21:02:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.219 21:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.219 "name": "raid_bdev1", 00:19:50.219 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:50.219 "strip_size_kb": 0, 00:19:50.219 "state": "online", 00:19:50.219 "raid_level": "raid1", 00:19:50.219 "superblock": true, 00:19:50.219 "num_base_bdevs": 2, 00:19:50.219 "num_base_bdevs_discovered": 1, 00:19:50.219 "num_base_bdevs_operational": 1, 00:19:50.219 "base_bdevs_list": [ 00:19:50.219 { 00:19:50.219 "name": null, 00:19:50.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.219 "is_configured": false, 00:19:50.219 "data_offset": 2048, 00:19:50.219 "data_size": 63488 00:19:50.219 }, 00:19:50.219 { 00:19:50.219 "name": "BaseBdev2", 00:19:50.219 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:50.219 "is_configured": true, 00:19:50.219 "data_offset": 2048, 00:19:50.219 "data_size": 63488 00:19:50.219 } 00:19:50.219 ] 00:19:50.219 }' 00:19:50.219 21:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.219 21:02:18 -- common/autotest_common.sh@10 -- # set +x 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:50.788 21:02:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:51.046 "name": "raid_bdev1", 00:19:51.046 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:51.046 "strip_size_kb": 0, 00:19:51.046 "state": "online", 00:19:51.046 "raid_level": "raid1", 00:19:51.046 "superblock": true, 00:19:51.046 "num_base_bdevs": 2, 00:19:51.046 "num_base_bdevs_discovered": 1, 00:19:51.046 "num_base_bdevs_operational": 1, 00:19:51.046 "base_bdevs_list": [ 00:19:51.046 { 00:19:51.046 "name": null, 00:19:51.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.046 "is_configured": false, 00:19:51.046 "data_offset": 2048, 00:19:51.046 "data_size": 63488 00:19:51.046 }, 00:19:51.046 { 00:19:51.046 "name": "BaseBdev2", 00:19:51.046 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:51.046 "is_configured": true, 00:19:51.046 "data_offset": 2048, 00:19:51.046 "data_size": 63488 00:19:51.046 } 00:19:51.046 ] 00:19:51.046 }' 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:51.046 21:02:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.304 [2024-06-09 21:02:19.430494] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.304 [2024-06-09 21:02:19.430556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.304 [2024-06-09 21:02:19.442661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:19:51.304 [2024-06-09 21:02:19.444701] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.304 21:02:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.681 "name": "raid_bdev1", 00:19:52.681 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:52.681 "strip_size_kb": 0, 00:19:52.681 "state": "online", 00:19:52.681 "raid_level": "raid1", 00:19:52.681 "superblock": true, 00:19:52.681 "num_base_bdevs": 2, 00:19:52.681 "num_base_bdevs_discovered": 2, 00:19:52.681 "num_base_bdevs_operational": 2, 00:19:52.681 "process": { 00:19:52.681 "type": "rebuild", 00:19:52.681 "target": "spare", 00:19:52.681 "progress": { 00:19:52.681 "blocks": 24576, 00:19:52.681 "percent": 38 00:19:52.681 } 00:19:52.681 }, 00:19:52.681 "base_bdevs_list": [ 00:19:52.681 { 00:19:52.681 "name": "spare", 00:19:52.681 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:52.681 
"is_configured": true, 00:19:52.681 "data_offset": 2048, 00:19:52.681 "data_size": 63488 00:19:52.681 }, 00:19:52.681 { 00:19:52.681 "name": "BaseBdev2", 00:19:52.681 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:52.681 "is_configured": true, 00:19:52.681 "data_offset": 2048, 00:19:52.681 "data_size": 63488 00:19:52.681 } 00:19:52.681 ] 00:19:52.681 }' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:52.681 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@657 -- # local timeout=412 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.681 21:02:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.939 21:02:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.939 "name": "raid_bdev1", 00:19:52.939 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:52.939 "strip_size_kb": 0, 00:19:52.939 "state": "online", 00:19:52.939 "raid_level": "raid1", 00:19:52.939 "superblock": true, 00:19:52.939 "num_base_bdevs": 2, 00:19:52.939 "num_base_bdevs_discovered": 2, 00:19:52.939 "num_base_bdevs_operational": 2, 00:19:52.939 "process": { 00:19:52.939 "type": "rebuild", 00:19:52.939 "target": "spare", 00:19:52.939 "progress": { 00:19:52.939 "blocks": 30720, 00:19:52.939 "percent": 48 00:19:52.939 } 00:19:52.939 }, 00:19:52.939 "base_bdevs_list": [ 00:19:52.940 { 00:19:52.940 "name": "spare", 00:19:52.940 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:52.940 "is_configured": true, 00:19:52.940 "data_offset": 2048, 00:19:52.940 "data_size": 63488 00:19:52.940 }, 00:19:52.940 { 00:19:52.940 "name": "BaseBdev2", 00:19:52.940 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:52.940 "is_configured": true, 00:19:52.940 "data_offset": 2048, 00:19:52.940 "data_size": 63488 00:19:52.940 } 00:19:52.940 ] 00:19:52.940 }' 00:19:52.940 21:02:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.940 21:02:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.940 21:02:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.198 21:02:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.198 21:02:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@658 
-- # (( SECONDS < timeout )) 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.145 21:02:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.403 "name": "raid_bdev1", 00:19:54.403 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:54.403 "strip_size_kb": 0, 00:19:54.403 "state": "online", 00:19:54.403 "raid_level": "raid1", 00:19:54.403 "superblock": true, 00:19:54.403 "num_base_bdevs": 2, 00:19:54.403 "num_base_bdevs_discovered": 2, 00:19:54.403 "num_base_bdevs_operational": 2, 00:19:54.403 "process": { 00:19:54.403 "type": "rebuild", 00:19:54.403 "target": "spare", 00:19:54.403 "progress": { 00:19:54.403 "blocks": 57344, 00:19:54.403 "percent": 90 00:19:54.403 } 00:19:54.403 }, 00:19:54.403 "base_bdevs_list": [ 00:19:54.403 { 00:19:54.403 "name": "spare", 00:19:54.403 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:54.403 "is_configured": true, 00:19:54.403 "data_offset": 2048, 00:19:54.403 "data_size": 63488 00:19:54.403 }, 00:19:54.403 { 00:19:54.403 "name": "BaseBdev2", 00:19:54.403 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:54.403 "is_configured": true, 00:19:54.403 "data_offset": 2048, 00:19:54.403 "data_size": 63488 00:19:54.403 } 00:19:54.403 ] 00:19:54.403 }' 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.403 21:02:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:54.404 [2024-06-09 21:02:22.560996] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:54.404 [2024-06-09 21:02:22.561087] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:54.404 [2024-06-09 21:02:22.561209] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.339 21:02:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.599 "name": "raid_bdev1", 00:19:55.599 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:55.599 "strip_size_kb": 0, 00:19:55.599 "state": 
"online", 00:19:55.599 "raid_level": "raid1", 00:19:55.599 "superblock": true, 00:19:55.599 "num_base_bdevs": 2, 00:19:55.599 "num_base_bdevs_discovered": 2, 00:19:55.599 "num_base_bdevs_operational": 2, 00:19:55.599 "base_bdevs_list": [ 00:19:55.599 { 00:19:55.599 "name": "spare", 00:19:55.599 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:55.599 "is_configured": true, 00:19:55.599 "data_offset": 2048, 00:19:55.599 "data_size": 63488 00:19:55.599 }, 00:19:55.599 { 00:19:55.599 "name": "BaseBdev2", 00:19:55.599 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:55.599 "is_configured": true, 00:19:55.599 "data_offset": 2048, 00:19:55.599 "data_size": 63488 00:19:55.599 } 00:19:55.599 ] 00:19:55.599 }' 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@660 -- # break 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.599 21:02:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.859 21:02:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.859 "name": "raid_bdev1", 00:19:55.859 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:55.859 "strip_size_kb": 0, 00:19:55.859 "state": "online", 00:19:55.859 "raid_level": "raid1", 00:19:55.859 "superblock": true, 00:19:55.859 "num_base_bdevs": 2, 00:19:55.859 "num_base_bdevs_discovered": 2, 00:19:55.859 "num_base_bdevs_operational": 2, 00:19:55.859 "base_bdevs_list": [ 00:19:55.859 { 00:19:55.859 "name": "spare", 00:19:55.859 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:55.859 "is_configured": true, 00:19:55.859 "data_offset": 2048, 00:19:55.859 "data_size": 63488 00:19:55.859 }, 00:19:55.859 { 00:19:55.859 "name": "BaseBdev2", 00:19:55.859 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:55.859 "is_configured": true, 00:19:55.859 "data_offset": 2048, 00:19:55.859 "data_size": 63488 00:19:55.859 } 00:19:55.859 ] 00:19:55.859 }' 00:19:55.859 21:02:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:56.117 21:02:24 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.117 21:02:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.375 21:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.375 "name": "raid_bdev1", 00:19:56.375 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:19:56.375 "strip_size_kb": 0, 00:19:56.375 "state": "online", 00:19:56.375 "raid_level": "raid1", 00:19:56.375 "superblock": true, 00:19:56.375 "num_base_bdevs": 2, 00:19:56.375 "num_base_bdevs_discovered": 2, 00:19:56.375 "num_base_bdevs_operational": 2, 00:19:56.375 "base_bdevs_list": [ 00:19:56.375 { 00:19:56.375 "name": "spare", 00:19:56.375 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:19:56.375 "is_configured": true, 00:19:56.375 "data_offset": 2048, 00:19:56.375 "data_size": 63488 00:19:56.375 }, 00:19:56.375 { 00:19:56.375 "name": "BaseBdev2", 00:19:56.375 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:19:56.375 "is_configured": true, 00:19:56.375 "data_offset": 2048, 00:19:56.375 "data_size": 63488 00:19:56.375 } 00:19:56.375 ] 00:19:56.375 }' 00:19:56.375 21:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.375 21:02:24 -- common/autotest_common.sh@10 -- # set +x 00:19:56.942 21:02:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.200 [2024-06-09 21:02:25.131711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.200 [2024-06-09 21:02:25.131748] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.200 [2024-06-09 21:02:25.131835] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.200 [2024-06-09 21:02:25.131911] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.200 [2024-06-09 21:02:25.131925] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:19:57.200 21:02:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.200 21:02:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:57.458 21:02:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:57.458 21:02:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:57.458 21:02:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@12 -- # local i 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:57.458 21:02:25 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:57.715 /dev/nbd0 00:19:57.715 21:02:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.715 21:02:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.715 21:02:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:57.715 21:02:25 -- common/autotest_common.sh@857 -- # local i 00:19:57.715 21:02:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:57.715 21:02:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:57.715 21:02:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:57.715 21:02:25 -- common/autotest_common.sh@861 -- # break 00:19:57.715 21:02:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:57.715 21:02:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:57.715 21:02:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.715 1+0 records in 00:19:57.715 1+0 records out 00:19:57.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204411 s, 20.0 MB/s 00:19:57.715 21:02:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.715 21:02:25 -- common/autotest_common.sh@874 -- # size=4096 00:19:57.715 21:02:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.715 21:02:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:57.715 21:02:25 -- common/autotest_common.sh@877 -- # return 0 00:19:57.715 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.715 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:57.715 21:02:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:57.973 /dev/nbd1 00:19:57.973 21:02:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:57.973 21:02:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:57.973 21:02:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:57.973 21:02:25 -- common/autotest_common.sh@857 -- # local i 00:19:57.973 21:02:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:57.973 21:02:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:57.973 21:02:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:57.973 21:02:25 -- common/autotest_common.sh@861 -- # break 00:19:57.973 21:02:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:57.973 21:02:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:57.973 21:02:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.973 1+0 records in 00:19:57.973 1+0 records out 00:19:57.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316558 s, 12.9 MB/s 00:19:57.973 21:02:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.973 21:02:25 -- common/autotest_common.sh@874 -- # size=4096 00:19:57.973 21:02:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.973 21:02:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:57.973 21:02:25 -- common/autotest_common.sh@877 -- # return 0 00:19:57.973 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.973 21:02:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:57.973 21:02:25 -- bdev/bdev_raid.sh@688 
-- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:57.973 21:02:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@51 -- # local i 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.973 21:02:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@41 -- # break 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.232 21:02:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.233 21:02:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@41 -- # break 00:19:58.491 21:02:26 -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.491 21:02:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:58.491 21:02:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:58.491 21:02:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:58.491 21:02:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:58.750 21:02:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:59.009 [2024-06-09 21:02:26.983225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:59.009 [2024-06-09 21:02:26.983343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.009 [2024-06-09 21:02:26.983381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:59.009 [2024-06-09 21:02:26.983409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.009 [2024-06-09 21:02:26.985835] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.009 [2024-06-09 21:02:26.985942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:59.009 [2024-06-09 21:02:26.986084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:59.009 [2024-06-09 21:02:26.986179] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:19:59.009 BaseBdev1 00:19:59.009 21:02:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:59.009 21:02:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:59.009 21:02:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:59.009 21:02:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:59.268 [2024-06-09 21:02:27.351885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:59.268 [2024-06-09 21:02:27.351997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.268 [2024-06-09 21:02:27.352035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:59.268 [2024-06-09 21:02:27.352064] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.268 [2024-06-09 21:02:27.352557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.268 [2024-06-09 21:02:27.352630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:59.268 [2024-06-09 21:02:27.352770] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:59.268 [2024-06-09 21:02:27.352786] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:59.268 [2024-06-09 21:02:27.352794] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.268 [2024-06-09 21:02:27.352842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:19:59.268 [2024-06-09 21:02:27.352915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.268 BaseBdev2 00:19:59.268 21:02:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:59.526 21:02:27 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:59.785 [2024-06-09 21:02:27.883999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:59.785 [2024-06-09 21:02:27.884091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.785 [2024-06-09 21:02:27.884133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:59.785 [2024-06-09 21:02:27.884157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.785 [2024-06-09 21:02:27.884717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.785 [2024-06-09 21:02:27.884776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:59.785 [2024-06-09 21:02:27.884929] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:59.785 [2024-06-09 21:02:27.884989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.785 spare 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:59.785 21:02:27 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.785 21:02:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.044 [2024-06-09 21:02:27.985103] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:00.044 [2024-06-09 21:02:27.985129] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.044 [2024-06-09 21:02:27.985330] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:20:00.044 [2024-06-09 21:02:27.985800] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:00.044 [2024-06-09 21:02:27.985824] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:00.044 [2024-06-09 21:02:27.986018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.044 21:02:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.044 "name": "raid_bdev1", 00:20:00.044 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:20:00.044 "strip_size_kb": 0, 00:20:00.044 "state": "online", 00:20:00.044 "raid_level": "raid1", 00:20:00.044 "superblock": true, 00:20:00.044 "num_base_bdevs": 2, 00:20:00.044 "num_base_bdevs_discovered": 2, 00:20:00.044 "num_base_bdevs_operational": 2, 00:20:00.044 "base_bdevs_list": [ 00:20:00.044 { 00:20:00.044 "name": "spare", 00:20:00.044 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:20:00.044 "is_configured": true, 00:20:00.044 "data_offset": 2048, 00:20:00.044 "data_size": 63488 00:20:00.044 }, 00:20:00.044 { 00:20:00.044 "name": "BaseBdev2", 00:20:00.044 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:20:00.044 "is_configured": true, 00:20:00.044 "data_offset": 2048, 00:20:00.044 "data_size": 63488 00:20:00.044 } 00:20:00.044 ] 00:20:00.044 }' 00:20:00.044 21:02:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.044 21:02:28 -- common/autotest_common.sh@10 -- # set +x 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.980 21:02:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.980 21:02:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:00.980 "name": "raid_bdev1", 00:20:00.980 "uuid": "e440edcb-b34c-46af-aa2b-f728a44d4eac", 00:20:00.980 "strip_size_kb": 0, 00:20:00.980 "state": "online", 
00:20:00.980 "raid_level": "raid1", 00:20:00.980 "superblock": true, 00:20:00.980 "num_base_bdevs": 2, 00:20:00.980 "num_base_bdevs_discovered": 2, 00:20:00.980 "num_base_bdevs_operational": 2, 00:20:00.980 "base_bdevs_list": [ 00:20:00.980 { 00:20:00.980 "name": "spare", 00:20:00.980 "uuid": "a80845b5-3537-5fcf-a35f-cd404bcb105d", 00:20:00.980 "is_configured": true, 00:20:00.980 "data_offset": 2048, 00:20:00.980 "data_size": 63488 00:20:00.980 }, 00:20:00.980 { 00:20:00.980 "name": "BaseBdev2", 00:20:00.980 "uuid": "a3adb1d5-f17e-5e0b-9661-29da111dcb63", 00:20:00.980 "is_configured": true, 00:20:00.980 "data_offset": 2048, 00:20:00.980 "data_size": 63488 00:20:00.980 } 00:20:00.980 ] 00:20:00.980 }' 00:20:00.980 21:02:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:00.980 21:02:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:00.980 21:02:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:01.239 21:02:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:01.239 21:02:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.239 21:02:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:01.239 21:02:29 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.239 21:02:29 -- bdev/bdev_raid.sh@709 -- # killprocess 122331 00:20:01.239 21:02:29 -- common/autotest_common.sh@926 -- # '[' -z 122331 ']' 00:20:01.239 21:02:29 -- common/autotest_common.sh@930 -- # kill -0 122331 00:20:01.239 21:02:29 -- common/autotest_common.sh@931 -- # uname 00:20:01.239 21:02:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.239 21:02:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122331 00:20:01.239 killing process with pid 122331 00:20:01.239 Received shutdown signal, test time was about 60.000000 seconds 00:20:01.239 00:20:01.239 Latency(us) 00:20:01.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.239 =================================================================================================================== 00:20:01.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.239 21:02:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:01.239 21:02:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:01.239 21:02:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122331' 00:20:01.239 21:02:29 -- common/autotest_common.sh@945 -- # kill 122331 00:20:01.239 21:02:29 -- common/autotest_common.sh@950 -- # wait 122331 00:20:01.239 [2024-06-09 21:02:29.412168] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.239 [2024-06-09 21:02:29.412256] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:01.239 [2024-06-09 21:02:29.412352] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:01.239 [2024-06-09 21:02:29.412371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:01.498 [2024-06-09 21:02:29.609410] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.433 ************************************ 00:20:02.433 END TEST raid_rebuild_test_sb 00:20:02.433 ************************************ 00:20:02.433 21:02:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:02.433 00:20:02.433 real 0m24.293s 00:20:02.433 
user 0m36.096s 00:20:02.433 sys 0m3.328s 00:20:02.433 21:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.433 21:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:02.692 21:02:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:02.692 21:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:02.692 21:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:02.692 ************************************ 00:20:02.692 START TEST raid_rebuild_test_io 00:20:02.692 ************************************ 00:20:02.692 21:02:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=122952 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 122952 /var/tmp/spdk-raid.sock 00:20:02.692 21:02:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.692 21:02:30 -- common/autotest_common.sh@819 -- # '[' -z 122952 ']' 00:20:02.692 21:02:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.692 21:02:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.692 21:02:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.692 21:02:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.692 21:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:02.692 [2024-06-09 21:02:30.708649] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
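(The launch traced above condenses to the sketch below: start bdevperf idle as an RPC target, wait for its socket, build the array over RPC, then trigger the background I/O. Paths, flags and RPC calls are the ones echoed in this trace; waitforlisten is the autotest_common.sh helper invoked at bdev_raid.sh@545; the spare_malloc/delay/passthru chain and all error handling are omitted.)

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-raid.sock
  # -z: start idle and wait for RPCs; -U: keep the perf job alive while a
  # base bdev is removed mid-I/O, which is the point of this rebuild test.
  "$rootdir/build/examples/bdevperf" -r "$sock" -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" "$sock"
  rpc() { "$rootdir/scripts/rpc.py" -s "$sock" "$@"; }
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_malloc_create 32 512 -b BaseBdev2
  rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # I/O starts only once perform_tests is sent (see the trace further on):
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &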
00:20:02.692 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.692 Zero copy mechanism will not be used. 00:20:02.692 [2024-06-09 21:02:30.708843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122952 ] 00:20:02.951 [2024-06-09 21:02:30.874561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.951 [2024-06-09 21:02:31.071489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.210 [2024-06-09 21:02:31.249163] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.777 21:02:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.777 21:02:31 -- common/autotest_common.sh@852 -- # return 0 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.777 BaseBdev1 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:03.777 21:02:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:04.049 BaseBdev2 00:20:04.050 21:02:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:04.322 spare_malloc 00:20:04.322 21:02:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:04.579 spare_delay 00:20:04.579 21:02:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:04.838 [2024-06-09 21:02:32.803120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.838 [2024-06-09 21:02:32.803223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.838 [2024-06-09 21:02:32.803265] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:04.838 [2024-06-09 21:02:32.803329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.838 [2024-06-09 21:02:32.805642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.838 [2024-06-09 21:02:32.805697] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.838 spare 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:04.838 [2024-06-09 21:02:32.987278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.838 [2024-06-09 21:02:32.989197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.838 [2024-06-09 21:02:32.989291] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:04.838 [2024-06-09 21:02:32.989303] bdev_raid.c:1585:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:20:04.838 [2024-06-09 21:02:32.989441] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:04.838 [2024-06-09 21:02:32.989806] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:04.838 [2024-06-09 21:02:32.990108] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:04.838 [2024-06-09 21:02:32.990399] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.838 21:02:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.838 21:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.838 21:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.838 21:02:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.096 21:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.096 "name": "raid_bdev1", 00:20:05.096 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:05.096 "strip_size_kb": 0, 00:20:05.096 "state": "online", 00:20:05.096 "raid_level": "raid1", 00:20:05.096 "superblock": false, 00:20:05.096 "num_base_bdevs": 2, 00:20:05.096 "num_base_bdevs_discovered": 2, 00:20:05.096 "num_base_bdevs_operational": 2, 00:20:05.096 "base_bdevs_list": [ 00:20:05.096 { 00:20:05.096 "name": "BaseBdev1", 00:20:05.096 "uuid": "361a0fd1-b461-4969-8480-319e9a64bd80", 00:20:05.096 "is_configured": true, 00:20:05.096 "data_offset": 0, 00:20:05.096 "data_size": 65536 00:20:05.096 }, 00:20:05.096 { 00:20:05.096 "name": "BaseBdev2", 00:20:05.096 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:05.096 "is_configured": true, 00:20:05.096 "data_offset": 0, 00:20:05.096 "data_size": 65536 00:20:05.096 } 00:20:05.096 ] 00:20:05.096 }' 00:20:05.096 21:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.096 21:02:33 -- common/autotest_common.sh@10 -- # set +x 00:20:06.029 21:02:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:06.029 21:02:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:06.029 [2024-06-09 21:02:34.059587] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.029 21:02:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:06.029 21:02:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.029 21:02:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:06.288 21:02:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:06.288 21:02:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:06.288 21:02:34 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:06.288 21:02:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:06.288 [2024-06-09 21:02:34.378894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:06.288 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:06.288 Zero copy mechanism will not be used. 00:20:06.288 Running I/O for 60 seconds... 00:20:06.548 [2024-06-09 21:02:34.520822] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:06.548 [2024-06-09 21:02:34.527126] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.548 21:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.809 21:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.809 "name": "raid_bdev1", 00:20:06.809 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:06.809 "strip_size_kb": 0, 00:20:06.809 "state": "online", 00:20:06.809 "raid_level": "raid1", 00:20:06.809 "superblock": false, 00:20:06.809 "num_base_bdevs": 2, 00:20:06.809 "num_base_bdevs_discovered": 1, 00:20:06.809 "num_base_bdevs_operational": 1, 00:20:06.809 "base_bdevs_list": [ 00:20:06.809 { 00:20:06.809 "name": null, 00:20:06.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.809 "is_configured": false, 00:20:06.809 "data_offset": 0, 00:20:06.809 "data_size": 65536 00:20:06.809 }, 00:20:06.809 { 00:20:06.809 "name": "BaseBdev2", 00:20:06.809 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:06.809 "is_configured": true, 00:20:06.809 "data_offset": 0, 00:20:06.809 "data_size": 65536 00:20:06.809 } 00:20:06.809 ] 00:20:06.809 }' 00:20:06.809 21:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.809 21:02:34 -- common/autotest_common.sh@10 -- # set +x 00:20:07.375 21:02:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:07.634 [2024-06-09 21:02:35.626582] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:07.634 [2024-06-09 21:02:35.626957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.634 21:02:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:07.634 [2024-06-09 21:02:35.674047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:07.634 [2024-06-09 21:02:35.676357] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:07.634 [2024-06-09 21:02:35.784937] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:07.634 [2024-06-09 21:02:35.785527] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:07.892 [2024-06-09 21:02:36.032417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:07.892 [2024-06-09 21:02:36.042536] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.458 [2024-06-09 21:02:36.361296] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:08.458 [2024-06-09 21:02:36.576326] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.717 21:02:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.717 [2024-06-09 21:02:36.827689] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:08.977 21:02:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.977 "name": "raid_bdev1", 00:20:08.977 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:08.977 "strip_size_kb": 0, 00:20:08.977 "state": "online", 00:20:08.977 "raid_level": "raid1", 00:20:08.977 "superblock": false, 00:20:08.977 "num_base_bdevs": 2, 00:20:08.977 "num_base_bdevs_discovered": 2, 00:20:08.977 "num_base_bdevs_operational": 2, 00:20:08.977 "process": { 00:20:08.977 "type": "rebuild", 00:20:08.977 "target": "spare", 00:20:08.977 "progress": { 00:20:08.977 "blocks": 14336, 00:20:08.977 "percent": 21 00:20:08.977 } 00:20:08.977 }, 00:20:08.977 "base_bdevs_list": [ 00:20:08.977 { 00:20:08.977 "name": "spare", 00:20:08.977 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:08.977 "is_configured": true, 00:20:08.977 "data_offset": 0, 00:20:08.977 "data_size": 65536 00:20:08.977 }, 00:20:08.977 { 00:20:08.977 "name": "BaseBdev2", 00:20:08.977 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:08.977 "is_configured": true, 00:20:08.977 "data_offset": 0, 00:20:08.977 "data_size": 65536 00:20:08.977 } 00:20:08.977 ] 00:20:08.977 }' 00:20:08.977 21:02:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.977 [2024-06-09 21:02:36.946123] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:08.977 [2024-06-09 21:02:36.946535] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:08.977 21:02:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.977 21:02:36 -- bdev/bdev_raid.sh@191 
-- # jq -r '.process.target // "none"' 00:20:08.977 21:02:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.977 21:02:37 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:09.236 [2024-06-09 21:02:37.193271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:09.236 [2024-06-09 21:02:37.286803] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.236 [2024-06-09 21:02:37.287417] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.236 [2024-06-09 21:02:37.388398] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:09.236 [2024-06-09 21:02:37.396380] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.495 [2024-06-09 21:02:37.433321] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.495 "name": "raid_bdev1", 00:20:09.495 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:09.495 "strip_size_kb": 0, 00:20:09.495 "state": "online", 00:20:09.495 "raid_level": "raid1", 00:20:09.495 "superblock": false, 00:20:09.495 "num_base_bdevs": 2, 00:20:09.495 "num_base_bdevs_discovered": 1, 00:20:09.495 "num_base_bdevs_operational": 1, 00:20:09.495 "base_bdevs_list": [ 00:20:09.495 { 00:20:09.495 "name": null, 00:20:09.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.495 "is_configured": false, 00:20:09.495 "data_offset": 0, 00:20:09.495 "data_size": 65536 00:20:09.495 }, 00:20:09.495 { 00:20:09.495 "name": "BaseBdev2", 00:20:09.495 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:09.495 "is_configured": true, 00:20:09.495 "data_offset": 0, 00:20:09.495 "data_size": 65536 00:20:09.495 } 00:20:09.495 ] 00:20:09.495 }' 00:20:09.495 21:02:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.495 21:02:37 -- common/autotest_common.sh@10 -- # set +x 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:10.430 
21:02:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.430 21:02:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.431 21:02:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.431 "name": "raid_bdev1", 00:20:10.431 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:10.431 "strip_size_kb": 0, 00:20:10.431 "state": "online", 00:20:10.431 "raid_level": "raid1", 00:20:10.431 "superblock": false, 00:20:10.431 "num_base_bdevs": 2, 00:20:10.431 "num_base_bdevs_discovered": 1, 00:20:10.431 "num_base_bdevs_operational": 1, 00:20:10.431 "base_bdevs_list": [ 00:20:10.431 { 00:20:10.431 "name": null, 00:20:10.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.431 "is_configured": false, 00:20:10.431 "data_offset": 0, 00:20:10.431 "data_size": 65536 00:20:10.431 }, 00:20:10.431 { 00:20:10.431 "name": "BaseBdev2", 00:20:10.431 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:10.431 "is_configured": true, 00:20:10.431 "data_offset": 0, 00:20:10.431 "data_size": 65536 00:20:10.431 } 00:20:10.431 ] 00:20:10.431 }' 00:20:10.431 21:02:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.689 21:02:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:10.689 21:02:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.689 21:02:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:10.689 21:02:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:10.689 [2024-06-09 21:02:38.864406] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:10.689 [2024-06-09 21:02:38.864694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:10.947 21:02:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:10.947 [2024-06-09 21:02:38.904071] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:10.947 [2024-06-09 21:02:38.906104] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.947 [2024-06-09 21:02:39.014747] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:10.947 [2024-06-09 21:02:39.015579] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:11.513 [2024-06-09 21:02:39.489065] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:11.514 [2024-06-09 21:02:39.489733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:11.771 [2024-06-09 21:02:39.698816] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:11.771 [2024-06-09 21:02:39.699105] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:11.771 21:02:39 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.771 21:02:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.771 [2024-06-09 21:02:39.923504] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:12.030 [2024-06-09 21:02:40.040861] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:12.030 [2024-06-09 21:02:40.041319] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:12.030 21:02:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.030 "name": "raid_bdev1", 00:20:12.030 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:12.030 "strip_size_kb": 0, 00:20:12.030 "state": "online", 00:20:12.030 "raid_level": "raid1", 00:20:12.030 "superblock": false, 00:20:12.030 "num_base_bdevs": 2, 00:20:12.030 "num_base_bdevs_discovered": 2, 00:20:12.030 "num_base_bdevs_operational": 2, 00:20:12.030 "process": { 00:20:12.030 "type": "rebuild", 00:20:12.030 "target": "spare", 00:20:12.030 "progress": { 00:20:12.030 "blocks": 16384, 00:20:12.030 "percent": 25 00:20:12.030 } 00:20:12.030 }, 00:20:12.030 "base_bdevs_list": [ 00:20:12.030 { 00:20:12.030 "name": "spare", 00:20:12.030 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:12.030 "is_configured": true, 00:20:12.030 "data_offset": 0, 00:20:12.030 "data_size": 65536 00:20:12.030 }, 00:20:12.030 { 00:20:12.030 "name": "BaseBdev2", 00:20:12.030 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:12.030 "is_configured": true, 00:20:12.030 "data_offset": 0, 00:20:12.030 "data_size": 65536 00:20:12.030 } 00:20:12.030 ] 00:20:12.030 }' 00:20:12.030 21:02:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.030 21:02:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.030 21:02:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@657 -- # local timeout=432 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.288 21:02:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.546 21:02:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.546 "name": "raid_bdev1", 00:20:12.546 "uuid": 
"4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:12.546 "strip_size_kb": 0, 00:20:12.546 "state": "online", 00:20:12.546 "raid_level": "raid1", 00:20:12.546 "superblock": false, 00:20:12.546 "num_base_bdevs": 2, 00:20:12.546 "num_base_bdevs_discovered": 2, 00:20:12.546 "num_base_bdevs_operational": 2, 00:20:12.546 "process": { 00:20:12.546 "type": "rebuild", 00:20:12.546 "target": "spare", 00:20:12.546 "progress": { 00:20:12.546 "blocks": 20480, 00:20:12.546 "percent": 31 00:20:12.546 } 00:20:12.546 }, 00:20:12.546 "base_bdevs_list": [ 00:20:12.546 { 00:20:12.546 "name": "spare", 00:20:12.546 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:12.546 "is_configured": true, 00:20:12.546 "data_offset": 0, 00:20:12.546 "data_size": 65536 00:20:12.546 }, 00:20:12.546 { 00:20:12.546 "name": "BaseBdev2", 00:20:12.546 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:12.546 "is_configured": true, 00:20:12.546 "data_offset": 0, 00:20:12.546 "data_size": 65536 00:20:12.546 } 00:20:12.546 ] 00:20:12.546 }' 00:20:12.546 21:02:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.546 21:02:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.546 21:02:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.547 21:02:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.547 21:02:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:12.804 [2024-06-09 21:02:40.856243] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:13.062 [2024-06-09 21:02:41.110516] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:13.320 [2024-06-09 21:02:41.325690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.578 21:02:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.836 [2024-06-09 21:02:41.770552] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:13.836 "name": "raid_bdev1", 00:20:13.836 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:13.836 "strip_size_kb": 0, 00:20:13.836 "state": "online", 00:20:13.836 "raid_level": "raid1", 00:20:13.836 "superblock": false, 00:20:13.836 "num_base_bdevs": 2, 00:20:13.836 "num_base_bdevs_discovered": 2, 00:20:13.836 "num_base_bdevs_operational": 2, 00:20:13.836 "process": { 00:20:13.836 "type": "rebuild", 00:20:13.836 "target": "spare", 00:20:13.836 "progress": { 00:20:13.836 "blocks": 40960, 00:20:13.836 "percent": 62 00:20:13.836 } 00:20:13.836 }, 00:20:13.836 "base_bdevs_list": [ 00:20:13.836 { 00:20:13.836 "name": "spare", 00:20:13.836 "uuid": 
"2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:13.836 "is_configured": true, 00:20:13.836 "data_offset": 0, 00:20:13.836 "data_size": 65536 00:20:13.836 }, 00:20:13.836 { 00:20:13.836 "name": "BaseBdev2", 00:20:13.836 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:13.836 "is_configured": true, 00:20:13.836 "data_offset": 0, 00:20:13.836 "data_size": 65536 00:20:13.836 } 00:20:13.836 ] 00:20:13.836 }' 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.836 21:02:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:14.402 [2024-06-09 21:02:42.347053] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.968 21:02:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.968 [2024-06-09 21:02:43.124370] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:15.227 "name": "raid_bdev1", 00:20:15.227 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:15.227 "strip_size_kb": 0, 00:20:15.227 "state": "online", 00:20:15.227 "raid_level": "raid1", 00:20:15.227 "superblock": false, 00:20:15.227 "num_base_bdevs": 2, 00:20:15.227 "num_base_bdevs_discovered": 2, 00:20:15.227 "num_base_bdevs_operational": 2, 00:20:15.227 "process": { 00:20:15.227 "type": "rebuild", 00:20:15.227 "target": "spare", 00:20:15.227 "progress": { 00:20:15.227 "blocks": 65536, 00:20:15.227 "percent": 100 00:20:15.227 } 00:20:15.227 }, 00:20:15.227 "base_bdevs_list": [ 00:20:15.227 { 00:20:15.227 "name": "spare", 00:20:15.227 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:15.227 "is_configured": true, 00:20:15.227 "data_offset": 0, 00:20:15.227 "data_size": 65536 00:20:15.227 }, 00:20:15.227 { 00:20:15.227 "name": "BaseBdev2", 00:20:15.227 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:15.227 "is_configured": true, 00:20:15.227 "data_offset": 0, 00:20:15.227 "data_size": 65536 00:20:15.227 } 00:20:15.227 ] 00:20:15.227 }' 00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:15.227 [2024-06-09 21:02:43.231056] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.227 [2024-06-09 21:02:43.234069] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:20:15.227 21:02:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.161 21:02:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.418 21:02:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.418 "name": "raid_bdev1", 00:20:16.418 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:16.418 "strip_size_kb": 0, 00:20:16.418 "state": "online", 00:20:16.418 "raid_level": "raid1", 00:20:16.418 "superblock": false, 00:20:16.418 "num_base_bdevs": 2, 00:20:16.418 "num_base_bdevs_discovered": 2, 00:20:16.418 "num_base_bdevs_operational": 2, 00:20:16.418 "base_bdevs_list": [ 00:20:16.418 { 00:20:16.418 "name": "spare", 00:20:16.418 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:16.418 "is_configured": true, 00:20:16.418 "data_offset": 0, 00:20:16.418 "data_size": 65536 00:20:16.418 }, 00:20:16.418 { 00:20:16.418 "name": "BaseBdev2", 00:20:16.418 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:16.418 "is_configured": true, 00:20:16.418 "data_offset": 0, 00:20:16.418 "data_size": 65536 00:20:16.418 } 00:20:16.418 ] 00:20:16.418 }' 00:20:16.418 21:02:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.418 21:02:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:16.418 21:02:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@660 -- # break 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.676 21:02:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.934 "name": "raid_bdev1", 00:20:16.934 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:16.934 "strip_size_kb": 0, 00:20:16.934 "state": "online", 00:20:16.934 "raid_level": "raid1", 00:20:16.934 "superblock": false, 00:20:16.934 "num_base_bdevs": 2, 00:20:16.934 "num_base_bdevs_discovered": 2, 00:20:16.934 "num_base_bdevs_operational": 2, 00:20:16.934 "base_bdevs_list": [ 00:20:16.934 { 00:20:16.934 "name": "spare", 00:20:16.934 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:16.934 "is_configured": true, 00:20:16.934 "data_offset": 0, 00:20:16.934 "data_size": 65536 00:20:16.934 }, 00:20:16.934 { 00:20:16.934 "name": "BaseBdev2", 00:20:16.934 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 
00:20:16.934 "is_configured": true, 00:20:16.934 "data_offset": 0, 00:20:16.934 "data_size": 65536 00:20:16.934 } 00:20:16.934 ] 00:20:16.934 }' 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:16.934 21:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.935 21:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.935 21:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.935 21:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.935 21:02:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.935 21:02:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.192 21:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.192 "name": "raid_bdev1", 00:20:17.192 "uuid": "4f8aaea6-8c55-4f4f-9cfa-cf07323e8f9c", 00:20:17.192 "strip_size_kb": 0, 00:20:17.192 "state": "online", 00:20:17.192 "raid_level": "raid1", 00:20:17.192 "superblock": false, 00:20:17.192 "num_base_bdevs": 2, 00:20:17.192 "num_base_bdevs_discovered": 2, 00:20:17.192 "num_base_bdevs_operational": 2, 00:20:17.192 "base_bdevs_list": [ 00:20:17.192 { 00:20:17.192 "name": "spare", 00:20:17.192 "uuid": "2bf3e1d1-9bcf-52d2-a13a-2305544e6861", 00:20:17.192 "is_configured": true, 00:20:17.192 "data_offset": 0, 00:20:17.192 "data_size": 65536 00:20:17.192 }, 00:20:17.192 { 00:20:17.192 "name": "BaseBdev2", 00:20:17.192 "uuid": "8718c39c-a969-4993-bb6b-b2bbd0976d62", 00:20:17.192 "is_configured": true, 00:20:17.192 "data_offset": 0, 00:20:17.192 "data_size": 65536 00:20:17.192 } 00:20:17.192 ] 00:20:17.192 }' 00:20:17.192 21:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.192 21:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:17.769 21:02:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:18.044 [2024-06-09 21:02:46.039202] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.044 [2024-06-09 21:02:46.039471] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.044 00:20:18.044 Latency(us) 00:20:18.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.044 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:18.044 raid_bdev1 : 11.76 114.82 344.45 0.00 0.00 12227.16 284.86 113436.86 00:20:18.044 =================================================================================================================== 00:20:18.044 Total : 114.82 344.45 0.00 0.00 12227.16 284.86 113436.86 00:20:18.044 [2024-06-09 21:02:46.155787] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.044 [2024-06-09 21:02:46.155972] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.044 0 00:20:18.044 [2024-06-09 21:02:46.156089] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.044 [2024-06-09 21:02:46.156105] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:18.044 21:02:46 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:18.044 21:02:46 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.610 21:02:46 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:18.610 21:02:46 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:18.610 21:02:46 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:18.610 /dev/nbd0 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:18.610 21:02:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:18.610 21:02:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:18.610 21:02:46 -- common/autotest_common.sh@857 -- # local i 00:20:18.610 21:02:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:18.610 21:02:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:18.610 21:02:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:18.610 21:02:46 -- common/autotest_common.sh@861 -- # break 00:20:18.610 21:02:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:18.610 21:02:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:18.610 21:02:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.868 1+0 records in 00:20:18.868 1+0 records out 00:20:18.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530922 s, 7.7 MB/s 00:20:18.868 21:02:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.868 21:02:46 -- common/autotest_common.sh@874 -- # size=4096 00:20:18.868 21:02:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.868 21:02:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:18.868 21:02:46 -- common/autotest_common.sh@877 -- # return 0 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.868 21:02:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:18.868 21:02:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:18.868 21:02:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 
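(Before the nbd_start_disks call above is traced through for BaseBdev2, a note on the helper it leans on: waitfornbd, whose autotest_common.sh@856-877 trace accompanies every attach here. Condensed, it first waits for the kernel to publish the device, then proves it is readable with one direct 4 KiB read. The retry bound of 20 matches the trace; any delay between retries is not echoed and is an assumption in this sketch.)

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
      done
      for ((i = 1; i <= 20; i++)); do
          # One direct read to confirm the device actually serves data.
          dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
          size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
      done
      return 1
  }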
00:20:18.868 21:02:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.868 21:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:18.868 /dev/nbd1 00:20:18.868 21:02:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:18.868 21:02:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:18.868 21:02:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:18.868 21:02:47 -- common/autotest_common.sh@857 -- # local i 00:20:18.868 21:02:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:18.869 21:02:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:18.869 21:02:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:18.869 21:02:47 -- common/autotest_common.sh@861 -- # break 00:20:18.869 21:02:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:18.869 21:02:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:18.869 21:02:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.869 1+0 records in 00:20:18.869 1+0 records out 00:20:18.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389979 s, 10.5 MB/s 00:20:18.869 21:02:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.869 21:02:47 -- common/autotest_common.sh@874 -- # size=4096 00:20:18.869 21:02:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.869 21:02:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:18.869 21:02:47 -- common/autotest_common.sh@877 -- # return 0 00:20:18.869 21:02:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.869 21:02:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.869 21:02:47 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:19.127 21:02:47 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.127 21:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@41 -- # break 00:20:19.385 
21:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.385 21:02:47 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@51 -- # local i 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:19.385 21:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@41 -- # break 00:20:19.643 21:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:20:19.643 21:02:47 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:19.643 21:02:47 -- bdev/bdev_raid.sh@709 -- # killprocess 122952 00:20:19.643 21:02:47 -- common/autotest_common.sh@926 -- # '[' -z 122952 ']' 00:20:19.643 21:02:47 -- common/autotest_common.sh@930 -- # kill -0 122952 00:20:19.643 21:02:47 -- common/autotest_common.sh@931 -- # uname 00:20:19.643 21:02:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.643 21:02:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122952 00:20:19.643 21:02:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:19.643 21:02:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:19.643 21:02:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122952' 00:20:19.643 killing process with pid 122952 00:20:19.643 21:02:47 -- common/autotest_common.sh@945 -- # kill 122952 00:20:19.643 Received shutdown signal, test time was about 13.336718 seconds 00:20:19.643 00:20:19.643 Latency(us) 00:20:19.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.643 =================================================================================================================== 00:20:19.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.643 21:02:47 -- common/autotest_common.sh@950 -- # wait 122952 00:20:19.643 [2024-06-09 21:02:47.718128] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.902 [2024-06-09 21:02:47.869232] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.837 ************************************ 00:20:20.837 END TEST raid_rebuild_test_io 00:20:20.837 ************************************ 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:20.837 00:20:20.837 real 0m18.242s 00:20:20.837 user 0m27.805s 00:20:20.837 sys 0m1.830s 00:20:20.837 21:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.837 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:20.837 21:02:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:20.837 21:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:20.837 21:02:48 -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.837 ************************************ 00:20:20.837 START TEST raid_rebuild_test_sb_io 00:20:20.837 ************************************ 00:20:20.837 21:02:48 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:20.837 21:02:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=123442 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:20.838 21:02:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123442 /var/tmp/spdk-raid.sock 00:20:20.838 21:02:48 -- common/autotest_common.sh@819 -- # '[' -z 123442 ']' 00:20:20.838 21:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:20.838 21:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.838 21:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:20.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:20.838 21:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.838 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:20.838 [2024-06-09 21:02:48.995510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:20.838 [2024-06-09 21:02:48.995867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123442 ] 00:20:20.838 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:20.838 Zero copy mechanism will not be used. 
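The sb_io variant drives the array through the bdevperf example app: launched with -z it idles until an RPC tells it to start, -r names the RPC socket it serves, -L bdev_raid is what switches on all the *DEBUG* lines filling this log, and the workload flags (-t 60 -w randrw -M 50 -o 3M -q 2) queue up 60 seconds of 50/50 random read/write traffic in 3 MiB units at queue depth 2, the background I/O that will later run concurrently with a rebuild (3 MiB is also why the zero-copy notice above fires: 3145728 > 65536). waitforlisten then blocks until the socket answers. A sketch of the launch using the paths from the trace; the readiness probe via rpc_get_methods is an assumed stand-in for the real waitforlisten helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the app is up and its RPC socket accepts requests.
    until "$rpc" -s "$sock" rpc_get_methods &> /dev/null; do sleep 0.2; done

The bdev stack assembled in the trace that follows layers a passthru bdev over each 32 MiB malloc disk, and slips a delay bdev (100000 us average and tail write latency, zero read latency) under the spare, plausibly so that rebuild writes stay slow enough for the test to observe and interrupt a rebuild in flight:

    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    $rpc -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc -s "$sock" bdev_passthru_create -b spare_delay -p spare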
00:20:21.096 [2024-06-09 21:02:49.151822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.355 [2024-06-09 21:02:49.339433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.355 [2024-06-09 21:02:49.521632] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.922 21:02:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.922 21:02:49 -- common/autotest_common.sh@852 -- # return 0 00:20:21.922 21:02:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:21.922 21:02:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:21.922 21:02:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.181 BaseBdev1_malloc 00:20:22.181 21:02:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:22.440 [2024-06-09 21:02:50.447605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:22.440 [2024-06-09 21:02:50.449073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.440 [2024-06-09 21:02:50.449275] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:22.440 [2024-06-09 21:02:50.449464] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.440 [2024-06-09 21:02:50.452020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.440 [2024-06-09 21:02:50.452238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.440 BaseBdev1 00:20:22.440 21:02:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:22.440 21:02:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:22.440 21:02:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.698 BaseBdev2_malloc 00:20:22.698 21:02:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:22.957 [2024-06-09 21:02:50.890335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:22.957 [2024-06-09 21:02:50.890609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.957 [2024-06-09 21:02:50.890790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:22.957 [2024-06-09 21:02:50.890988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.957 [2024-06-09 21:02:50.893390] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.957 [2024-06-09 21:02:50.893640] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:22.957 BaseBdev2 00:20:22.957 21:02:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:23.216 spare_malloc 00:20:23.216 21:02:51 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:23.474 spare_delay 00:20:23.475 21:02:51 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:23.475 [2024-06-09 21:02:51.569487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.475 [2024-06-09 21:02:51.569817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.475 [2024-06-09 21:02:51.570032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:23.475 [2024-06-09 21:02:51.570243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.475 [2024-06-09 21:02:51.572513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.475 [2024-06-09 21:02:51.572697] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.475 spare 00:20:23.475 21:02:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:23.733 [2024-06-09 21:02:51.773650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.733 [2024-06-09 21:02:51.775649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.733 [2024-06-09 21:02:51.775998] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:23.734 [2024-06-09 21:02:51.776130] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:23.734 [2024-06-09 21:02:51.776306] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:23.734 [2024-06-09 21:02:51.776807] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:23.734 [2024-06-09 21:02:51.776947] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:23.734 [2024-06-09 21:02:51.777203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.734 21:02:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.993 21:02:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.993 "name": "raid_bdev1", 00:20:23.993 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:23.993 "strip_size_kb": 0, 00:20:23.993 "state": "online", 00:20:23.993 "raid_level": "raid1", 00:20:23.993 "superblock": true, 00:20:23.993 "num_base_bdevs": 2, 00:20:23.993 "num_base_bdevs_discovered": 2, 00:20:23.993 "num_base_bdevs_operational": 2, 00:20:23.993 
"base_bdevs_list": [ 00:20:23.993 { 00:20:23.993 "name": "BaseBdev1", 00:20:23.993 "uuid": "4dc3c405-c957-58a8-b934-da6179da50d7", 00:20:23.993 "is_configured": true, 00:20:23.993 "data_offset": 2048, 00:20:23.993 "data_size": 63488 00:20:23.993 }, 00:20:23.993 { 00:20:23.993 "name": "BaseBdev2", 00:20:23.993 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:23.993 "is_configured": true, 00:20:23.993 "data_offset": 2048, 00:20:23.993 "data_size": 63488 00:20:23.993 } 00:20:23.993 ] 00:20:23.993 }' 00:20:23.993 21:02:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.993 21:02:52 -- common/autotest_common.sh@10 -- # set +x 00:20:24.559 21:02:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:24.559 21:02:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:24.817 [2024-06-09 21:02:52.786121] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.817 21:02:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:24.817 21:02:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.817 21:02:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:24.817 21:02:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:24.817 21:02:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:25.076 21:02:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:25.076 21:02:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:25.076 [2024-06-09 21:02:53.100609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:25.076 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:25.076 Zero copy mechanism will not be used. 00:20:25.076 Running I/O for 60 seconds... 
00:20:25.076 [2024-06-09 21:02:53.235257] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:25.076 [2024-06-09 21:02:53.242255] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:20:25.335 21:02:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.336 "name": "raid_bdev1", 00:20:25.336 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:25.336 "strip_size_kb": 0, 00:20:25.336 "state": "online", 00:20:25.336 "raid_level": "raid1", 00:20:25.336 "superblock": true, 00:20:25.336 "num_base_bdevs": 2, 00:20:25.336 "num_base_bdevs_discovered": 1, 00:20:25.336 "num_base_bdevs_operational": 1, 00:20:25.336 "base_bdevs_list": [ 00:20:25.336 { 00:20:25.336 "name": null, 00:20:25.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.336 "is_configured": false, 00:20:25.336 "data_offset": 2048, 00:20:25.336 "data_size": 63488 00:20:25.336 }, 00:20:25.336 { 00:20:25.336 "name": "BaseBdev2", 00:20:25.336 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:25.336 "is_configured": true, 00:20:25.336 "data_offset": 2048, 00:20:25.336 "data_size": 63488 00:20:25.336 } 00:20:25.336 ] 00:20:25.336 }' 00:20:25.336 21:02:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.336 21:02:53 -- common/autotest_common.sh@10 -- # set +x 00:20:26.272 21:02:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.272 [2024-06-09 21:02:54.367809] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:26.272 [2024-06-09 21:02:54.368101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.272 21:02:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:26.272 [2024-06-09 21:02:54.423944] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:26.272 [2024-06-09 21:02:54.426155] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.530 [2024-06-09 21:02:54.534545] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:26.530 [2024-06-09 21:02:54.535183] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:26.788 [2024-06-09 21:02:54.744269] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:26.788 [2024-06-09 21:02:54.744650] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.047 [2024-06-09 21:02:55.074075] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:27.047 [2024-06-09 21:02:55.194594] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.305 21:02:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.563 21:02:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.563 "name": "raid_bdev1", 00:20:27.563 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:27.563 "strip_size_kb": 0, 00:20:27.563 "state": "online", 00:20:27.563 "raid_level": "raid1", 00:20:27.563 "superblock": true, 00:20:27.563 "num_base_bdevs": 2, 00:20:27.563 "num_base_bdevs_discovered": 2, 00:20:27.563 "num_base_bdevs_operational": 2, 00:20:27.563 "process": { 00:20:27.563 "type": "rebuild", 00:20:27.563 "target": "spare", 00:20:27.563 "progress": { 00:20:27.563 "blocks": 14336, 00:20:27.563 "percent": 22 00:20:27.563 } 00:20:27.563 }, 00:20:27.563 "base_bdevs_list": [ 00:20:27.563 { 00:20:27.563 "name": "spare", 00:20:27.563 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:27.563 "is_configured": true, 00:20:27.563 "data_offset": 2048, 00:20:27.563 "data_size": 63488 00:20:27.563 }, 00:20:27.563 { 00:20:27.563 "name": "BaseBdev2", 00:20:27.563 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:27.563 "is_configured": true, 00:20:27.563 "data_offset": 2048, 00:20:27.563 "data_size": 63488 00:20:27.563 } 00:20:27.563 ] 00:20:27.563 }' 00:20:27.563 21:02:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.563 [2024-06-09 21:02:55.695408] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:27.564 [2024-06-09 21:02:55.695857] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:27.564 21:02:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.564 21:02:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.822 21:02:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.822 21:02:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:28.079 [2024-06-09 21:02:56.003724] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.079 [2024-06-09 21:02:56.029274] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:28.079 [2024-06-09 21:02:56.131385] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.079 [2024-06-09 
21:02:56.133653] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.079 [2024-06-09 21:02:56.159123] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.079 21:02:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.080 21:02:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.080 21:02:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.337 21:02:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.337 "name": "raid_bdev1", 00:20:28.337 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:28.337 "strip_size_kb": 0, 00:20:28.337 "state": "online", 00:20:28.337 "raid_level": "raid1", 00:20:28.337 "superblock": true, 00:20:28.337 "num_base_bdevs": 2, 00:20:28.337 "num_base_bdevs_discovered": 1, 00:20:28.337 "num_base_bdevs_operational": 1, 00:20:28.337 "base_bdevs_list": [ 00:20:28.337 { 00:20:28.337 "name": null, 00:20:28.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.337 "is_configured": false, 00:20:28.337 "data_offset": 2048, 00:20:28.337 "data_size": 63488 00:20:28.337 }, 00:20:28.337 { 00:20:28.337 "name": "BaseBdev2", 00:20:28.337 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:28.337 "is_configured": true, 00:20:28.337 "data_offset": 2048, 00:20:28.337 "data_size": 63488 00:20:28.337 } 00:20:28.337 ] 00:20:28.337 }' 00:20:28.337 21:02:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.337 21:02:56 -- common/autotest_common.sh@10 -- # set +x 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.930 21:02:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.189 "name": "raid_bdev1", 00:20:29.189 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:29.189 "strip_size_kb": 0, 00:20:29.189 "state": "online", 00:20:29.189 "raid_level": "raid1", 00:20:29.189 "superblock": true, 00:20:29.189 "num_base_bdevs": 2, 00:20:29.189 "num_base_bdevs_discovered": 1, 00:20:29.189 "num_base_bdevs_operational": 1, 00:20:29.189 "base_bdevs_list": [ 00:20:29.189 { 00:20:29.189 "name": null, 00:20:29.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.189 
"is_configured": false, 00:20:29.189 "data_offset": 2048, 00:20:29.189 "data_size": 63488 00:20:29.189 }, 00:20:29.189 { 00:20:29.189 "name": "BaseBdev2", 00:20:29.189 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:29.189 "is_configured": true, 00:20:29.189 "data_offset": 2048, 00:20:29.189 "data_size": 63488 00:20:29.189 } 00:20:29.189 ] 00:20:29.189 }' 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:29.189 21:02:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.448 [2024-06-09 21:02:57.522261] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:29.448 [2024-06-09 21:02:57.522327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.448 [2024-06-09 21:02:57.557440] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:29.448 [2024-06-09 21:02:57.559533] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:29.448 21:02:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:29.706 [2024-06-09 21:02:57.680696] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:29.706 [2024-06-09 21:02:57.681152] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:29.706 [2024-06-09 21:02:57.813018] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:29.706 [2024-06-09 21:02:57.813174] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:30.273 [2024-06-09 21:02:58.263624] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.531 21:02:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.531 [2024-06-09 21:02:58.578299] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:30.531 [2024-06-09 21:02:58.578760] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.790 "name": "raid_bdev1", 00:20:30.790 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:30.790 "strip_size_kb": 0, 00:20:30.790 "state": "online", 00:20:30.790 "raid_level": "raid1", 00:20:30.790 "superblock": true, 00:20:30.790 "num_base_bdevs": 2, 00:20:30.790 
"num_base_bdevs_discovered": 2, 00:20:30.790 "num_base_bdevs_operational": 2, 00:20:30.790 "process": { 00:20:30.790 "type": "rebuild", 00:20:30.790 "target": "spare", 00:20:30.790 "progress": { 00:20:30.790 "blocks": 14336, 00:20:30.790 "percent": 22 00:20:30.790 } 00:20:30.790 }, 00:20:30.790 "base_bdevs_list": [ 00:20:30.790 { 00:20:30.790 "name": "spare", 00:20:30.790 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:30.790 "is_configured": true, 00:20:30.790 "data_offset": 2048, 00:20:30.790 "data_size": 63488 00:20:30.790 }, 00:20:30.790 { 00:20:30.790 "name": "BaseBdev2", 00:20:30.790 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:30.790 "is_configured": true, 00:20:30.790 "data_offset": 2048, 00:20:30.790 "data_size": 63488 00:20:30.790 } 00:20:30.790 ] 00:20:30.790 }' 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.790 [2024-06-09 21:02:58.802348] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:30.790 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@657 -- # local timeout=450 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.790 21:02:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.049 21:02:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.049 "name": "raid_bdev1", 00:20:31.049 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:31.049 "strip_size_kb": 0, 00:20:31.049 "state": "online", 00:20:31.049 "raid_level": "raid1", 00:20:31.049 "superblock": true, 00:20:31.049 "num_base_bdevs": 2, 00:20:31.049 "num_base_bdevs_discovered": 2, 00:20:31.049 "num_base_bdevs_operational": 2, 00:20:31.049 "process": { 00:20:31.049 "type": "rebuild", 00:20:31.049 "target": "spare", 00:20:31.049 "progress": { 00:20:31.049 "blocks": 18432, 00:20:31.049 "percent": 29 00:20:31.049 } 00:20:31.049 }, 00:20:31.049 "base_bdevs_list": [ 00:20:31.049 { 00:20:31.049 "name": "spare", 00:20:31.049 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:31.049 "is_configured": true, 00:20:31.049 "data_offset": 2048, 00:20:31.049 "data_size": 63488 00:20:31.049 }, 00:20:31.049 { 00:20:31.049 "name": "BaseBdev2", 00:20:31.049 "uuid": 
"b73ffe5e-610e-588f-b320-725f1980c228", 00:20:31.049 "is_configured": true, 00:20:31.049 "data_offset": 2048, 00:20:31.049 "data_size": 63488 00:20:31.049 } 00:20:31.049 ] 00:20:31.049 }' 00:20:31.049 21:02:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.049 21:02:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.049 21:02:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:31.308 [2024-06-09 21:02:59.227037] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:31.308 21:02:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.308 21:02:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.566 [2024-06-09 21:02:59.537155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:31.825 [2024-06-09 21:02:59.878766] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.084 21:03:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.343 21:03:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.343 "name": "raid_bdev1", 00:20:32.343 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:32.343 "strip_size_kb": 0, 00:20:32.343 "state": "online", 00:20:32.343 "raid_level": "raid1", 00:20:32.343 "superblock": true, 00:20:32.343 "num_base_bdevs": 2, 00:20:32.343 "num_base_bdevs_discovered": 2, 00:20:32.343 "num_base_bdevs_operational": 2, 00:20:32.343 "process": { 00:20:32.343 "type": "rebuild", 00:20:32.343 "target": "spare", 00:20:32.343 "progress": { 00:20:32.343 "blocks": 43008, 00:20:32.343 "percent": 67 00:20:32.343 } 00:20:32.343 }, 00:20:32.343 "base_bdevs_list": [ 00:20:32.343 { 00:20:32.343 "name": "spare", 00:20:32.343 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:32.343 "is_configured": true, 00:20:32.343 "data_offset": 2048, 00:20:32.343 "data_size": 63488 00:20:32.343 }, 00:20:32.343 { 00:20:32.343 "name": "BaseBdev2", 00:20:32.343 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:32.343 "is_configured": true, 00:20:32.343 "data_offset": 2048, 00:20:32.343 "data_size": 63488 00:20:32.343 } 00:20:32.343 ] 00:20:32.343 }' 00:20:32.343 21:03:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.343 21:03:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.343 [2024-06-09 21:03:00.515697] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:32.343 21:03:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.343 [2024-06-09 21:03:00.516279] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:32.602 21:03:00 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.602 21:03:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:32.602 [2024-06-09 21:03:00.737016] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:33.169 [2024-06-09 21:03:01.067446] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:33.169 [2024-06-09 21:03:01.174634] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.428 21:03:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.687 21:03:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.687 "name": "raid_bdev1", 00:20:33.687 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:33.687 "strip_size_kb": 0, 00:20:33.687 "state": "online", 00:20:33.687 "raid_level": "raid1", 00:20:33.687 "superblock": true, 00:20:33.687 "num_base_bdevs": 2, 00:20:33.687 "num_base_bdevs_discovered": 2, 00:20:33.687 "num_base_bdevs_operational": 2, 00:20:33.687 "process": { 00:20:33.687 "type": "rebuild", 00:20:33.687 "target": "spare", 00:20:33.687 "progress": { 00:20:33.687 "blocks": 61440, 00:20:33.687 "percent": 96 00:20:33.687 } 00:20:33.687 }, 00:20:33.687 "base_bdevs_list": [ 00:20:33.687 { 00:20:33.687 "name": "spare", 00:20:33.687 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:33.687 "is_configured": true, 00:20:33.687 "data_offset": 2048, 00:20:33.687 "data_size": 63488 00:20:33.687 }, 00:20:33.687 { 00:20:33.687 "name": "BaseBdev2", 00:20:33.687 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:33.687 "is_configured": true, 00:20:33.687 "data_offset": 2048, 00:20:33.687 "data_size": 63488 00:20:33.687 } 00:20:33.687 ] 00:20:33.687 }' 00:20:33.687 21:03:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.688 [2024-06-09 21:03:01.830842] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:33.946 21:03:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.946 21:03:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:33.946 21:03:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.946 21:03:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:33.946 [2024-06-09 21:03:01.930814] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:33.946 [2024-06-09 21:03:01.939175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.879 21:03:02 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.879 21:03:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:35.136 "name": "raid_bdev1", 00:20:35.136 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:35.136 "strip_size_kb": 0, 00:20:35.136 "state": "online", 00:20:35.136 "raid_level": "raid1", 00:20:35.136 "superblock": true, 00:20:35.136 "num_base_bdevs": 2, 00:20:35.136 "num_base_bdevs_discovered": 2, 00:20:35.136 "num_base_bdevs_operational": 2, 00:20:35.136 "base_bdevs_list": [ 00:20:35.136 { 00:20:35.136 "name": "spare", 00:20:35.136 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:35.136 "is_configured": true, 00:20:35.136 "data_offset": 2048, 00:20:35.136 "data_size": 63488 00:20:35.136 }, 00:20:35.136 { 00:20:35.136 "name": "BaseBdev2", 00:20:35.136 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:35.136 "is_configured": true, 00:20:35.136 "data_offset": 2048, 00:20:35.136 "data_size": 63488 00:20:35.136 } 00:20:35.136 ] 00:20:35.136 }' 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@660 -- # break 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.136 21:03:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.394 21:03:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:35.394 "name": "raid_bdev1", 00:20:35.394 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:35.394 "strip_size_kb": 0, 00:20:35.394 "state": "online", 00:20:35.394 "raid_level": "raid1", 00:20:35.394 "superblock": true, 00:20:35.394 "num_base_bdevs": 2, 00:20:35.395 "num_base_bdevs_discovered": 2, 00:20:35.395 "num_base_bdevs_operational": 2, 00:20:35.395 "base_bdevs_list": [ 00:20:35.395 { 00:20:35.395 "name": "spare", 00:20:35.395 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:35.395 "is_configured": true, 00:20:35.395 "data_offset": 2048, 00:20:35.395 "data_size": 63488 00:20:35.395 }, 00:20:35.395 { 00:20:35.395 "name": "BaseBdev2", 00:20:35.395 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:35.395 "is_configured": true, 00:20:35.395 "data_offset": 2048, 00:20:35.395 "data_size": 63488 00:20:35.395 } 00:20:35.395 ] 00:20:35.395 }' 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:35.395 21:03:03 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:35.395 21:03:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.653 "name": "raid_bdev1", 00:20:35.653 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:35.653 "strip_size_kb": 0, 00:20:35.653 "state": "online", 00:20:35.653 "raid_level": "raid1", 00:20:35.653 "superblock": true, 00:20:35.653 "num_base_bdevs": 2, 00:20:35.653 "num_base_bdevs_discovered": 2, 00:20:35.653 "num_base_bdevs_operational": 2, 00:20:35.653 "base_bdevs_list": [ 00:20:35.653 { 00:20:35.653 "name": "spare", 00:20:35.653 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:35.653 "is_configured": true, 00:20:35.653 "data_offset": 2048, 00:20:35.653 "data_size": 63488 00:20:35.653 }, 00:20:35.653 { 00:20:35.653 "name": "BaseBdev2", 00:20:35.653 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:35.653 "is_configured": true, 00:20:35.653 "data_offset": 2048, 00:20:35.653 "data_size": 63488 00:20:35.653 } 00:20:35.653 ] 00:20:35.653 }' 00:20:35.653 21:03:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.653 21:03:03 -- common/autotest_common.sh@10 -- # set +x 00:20:36.220 21:03:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:36.478 [2024-06-09 21:03:04.595996] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.478 [2024-06-09 21:03:04.596057] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.478 00:20:36.478 Latency(us) 00:20:36.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.478 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:36.478 raid_bdev1 : 11.54 112.56 337.67 0.00 0.00 12338.54 284.86 114390.11 00:20:36.478 =================================================================================================================== 00:20:36.478 Total : 112.56 337.67 0.00 0.00 12338.54 284.86 114390.11 00:20:36.737 [2024-06-09 21:03:04.659600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.737 [2024-06-09 21:03:04.659670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.737 [2024-06-09 21:03:04.659756] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.737 
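Three details of the rebuild-wait machinery traced above are worth pulling out. First, jq's // (alternative) operator makes one probe valid in every phase: .process.type // "none" yields "rebuild" while a process runs and "none" otherwise, and the progress object pairs raw blocks with a derived percent (14336 of 63488 blocks is the 22 seen earlier). Second, the wait bounds itself with bash's built-in elapsed-seconds counter via local timeout=450 and (( SECONDS < timeout )). Third, the "[: =: unary operator expected" complaint from bdev_raid.sh line 617 is a real script bug the log caught: the xtrace shows '[' = false ']', meaning the left operand expanded to nothing and [ saw too few arguments; quoting the operand or using [[ ]] avoids it. A sketch of the loop and the fix (the variable name at line 617 is not visible in the trace, so $fast_test below is hypothetical):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    get_raid() {
        "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    }
    timeout=450
    while (( SECONDS < timeout )); do    # SECONDS: seconds since this shell started
        [ "$(get_raid | jq -r '.process.type // "none"')" = none ] && break
        sleep 1
    done

    # Line-617 class of bug: an empty operand collapses [ $var = false ]
    # into [ = false ]. Quote it, or use the [[ ]] builtin:
    if [ "${fast_test:-}" = false ]; then :; fi   # $fast_test is hypothetical
    if [[ $fast_test = false ]]; then :; fi

As a cross-check on the final report above, 112.56 IOPS at 3 MiB per I/O is 337.7 MiB/s, matching the throughput column beside it.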
[2024-06-09 21:03:04.659786] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:20:36.737 0 00:20:36.737 21:03:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.737 21:03:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:36.737 21:03:04 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:36.737 21:03:04 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:36.737 21:03:04 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@12 -- # local i 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.737 21:03:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:36.996 /dev/nbd0 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:36.996 21:03:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:36.996 21:03:05 -- common/autotest_common.sh@857 -- # local i 00:20:36.996 21:03:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:36.996 21:03:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:36.996 21:03:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:36.996 21:03:05 -- common/autotest_common.sh@861 -- # break 00:20:36.996 21:03:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:36.996 21:03:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:36.996 21:03:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:36.996 1+0 records in 00:20:36.996 1+0 records out 00:20:36.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243734 s, 16.8 MB/s 00:20:36.996 21:03:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.996 21:03:05 -- common/autotest_common.sh@874 -- # size=4096 00:20:36.996 21:03:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.996 21:03:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:36.996 21:03:05 -- common/autotest_common.sh@877 -- # return 0 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.996 21:03:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:36.996 21:03:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:36.996 21:03:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:36.996 21:03:05 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@12 -- # local i 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.996 21:03:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:37.255 /dev/nbd1 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:37.514 21:03:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:37.514 21:03:05 -- common/autotest_common.sh@857 -- # local i 00:20:37.514 21:03:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:37.514 21:03:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:37.514 21:03:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:37.514 21:03:05 -- common/autotest_common.sh@861 -- # break 00:20:37.514 21:03:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:37.514 21:03:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:37.514 21:03:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.514 1+0 records in 00:20:37.514 1+0 records out 00:20:37.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431386 s, 9.5 MB/s 00:20:37.514 21:03:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.514 21:03:05 -- common/autotest_common.sh@874 -- # size=4096 00:20:37.514 21:03:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.514 21:03:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:37.514 21:03:05 -- common/autotest_common.sh@877 -- # return 0 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:37.514 21:03:05 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:37.514 21:03:05 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@51 -- # local i 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.514 21:03:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@41 -- # break 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.773 21:03:05 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:37.773 
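Deleting the raid leaves the RPC view empty (the jq length probe above returned 0) and frees its members, so the data check is repeated over NBD, this time as cmp -i 1048576: with a superblock the payload only starts at data_offset, 2048 blocks x 512 B = 1 MiB, and the first MiB of each member holds metadata that legitimately differs between spare and BaseBdev2, so it must be skipped. The closing phase traced below then deletes each passthru wrapper and re-creates it over the untouched malloc disk; registering the new bdev makes the raid module examine it, find the on-disk superblock ("raid superblock found on bdev BaseBdev1") and claim it, reassembling raid_bdev1 from metadata alone, with a newer superblock seq_number overriding a stale half-assembled array. A sketch of both steps:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    [ "$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq length)" = 0 ]   # array really gone
    # Skip the metadata region on both exports: 2048 blocks * 512 B = 1048576
    cmp -i $((2048 * 512)) /dev/nbd0 /dev/nbd1

    # Recreating the passthru wrapper triggers examine + claim from the superblock.
    "$rpc" -s "$sock" bdev_passthru_delete BaseBdev1
    "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1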
21:03:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@51 -- # local i 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.773 21:03:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@41 -- # break 00:20:38.032 21:03:06 -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.032 21:03:06 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:38.032 21:03:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:38.032 21:03:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:38.032 21:03:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:38.290 21:03:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:38.549 [2024-06-09 21:03:06.592831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:38.549 [2024-06-09 21:03:06.592945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.549 [2024-06-09 21:03:06.592986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:38.549 [2024-06-09 21:03:06.593021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.549 [2024-06-09 21:03:06.595715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.549 [2024-06-09 21:03:06.595814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:38.549 [2024-06-09 21:03:06.595953] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:38.549 [2024-06-09 21:03:06.596057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.549 BaseBdev1 00:20:38.549 21:03:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:38.549 21:03:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:38.549 21:03:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:38.808 21:03:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:39.067 [2024-06-09 21:03:07.040438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:39.067 [2024-06-09 21:03:07.040533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.067 [2024-06-09 21:03:07.040572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:39.067 [2024-06-09 21:03:07.040604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.067 [2024-06-09 21:03:07.041125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:39.067 [2024-06-09 21:03:07.041203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:39.067 [2024-06-09 21:03:07.041345] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:39.067 [2024-06-09 21:03:07.041364] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:39.067 [2024-06-09 21:03:07.041373] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.067 [2024-06-09 21:03:07.041392] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:20:39.067 [2024-06-09 21:03:07.041483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.067 BaseBdev2 00:20:39.067 21:03:07 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.344 [2024-06-09 21:03:07.440575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.344 [2024-06-09 21:03:07.440663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.344 [2024-06-09 21:03:07.440707] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:39.344 [2024-06-09 21:03:07.440733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.344 [2024-06-09 21:03:07.441267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.344 [2024-06-09 21:03:07.441335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.344 [2024-06-09 21:03:07.441463] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:39.344 [2024-06-09 21:03:07.441493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.344 spare 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.344 21:03:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.616 [2024-06-09 21:03:07.541639] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:39.616 [2024-06-09 21:03:07.541667] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.616 [2024-06-09 21:03:07.541821] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:20:39.616 [2024-06-09 21:03:07.542305] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:39.616 [2024-06-09 21:03:07.542331] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:39.616 [2024-06-09 21:03:07.542506] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.616 21:03:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.616 "name": "raid_bdev1", 00:20:39.616 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:39.616 "strip_size_kb": 0, 00:20:39.616 "state": "online", 00:20:39.616 "raid_level": "raid1", 00:20:39.616 "superblock": true, 00:20:39.616 "num_base_bdevs": 2, 00:20:39.616 "num_base_bdevs_discovered": 2, 00:20:39.616 "num_base_bdevs_operational": 2, 00:20:39.616 "base_bdevs_list": [ 00:20:39.616 { 00:20:39.616 "name": "spare", 00:20:39.616 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:39.616 "is_configured": true, 00:20:39.616 "data_offset": 2048, 00:20:39.616 "data_size": 63488 00:20:39.616 }, 00:20:39.616 { 00:20:39.616 "name": "BaseBdev2", 00:20:39.616 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:39.616 "is_configured": true, 00:20:39.616 "data_offset": 2048, 00:20:39.616 "data_size": 63488 00:20:39.616 } 00:20:39.616 ] 00:20:39.616 }' 00:20:39.616 21:03:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.616 21:03:07 -- common/autotest_common.sh@10 -- # set +x 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.183 21:03:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:40.442 "name": "raid_bdev1", 00:20:40.442 "uuid": "55c8fef7-c17b-48d6-9453-057f7924d5ad", 00:20:40.442 "strip_size_kb": 0, 00:20:40.442 "state": "online", 00:20:40.442 "raid_level": "raid1", 00:20:40.442 "superblock": true, 00:20:40.442 "num_base_bdevs": 2, 00:20:40.442 "num_base_bdevs_discovered": 2, 00:20:40.442 "num_base_bdevs_operational": 2, 00:20:40.442 "base_bdevs_list": [ 00:20:40.442 { 00:20:40.442 "name": "spare", 00:20:40.442 "uuid": "cc2dd6de-a599-5f03-9f8d-32640b7a10e6", 00:20:40.442 "is_configured": true, 00:20:40.442 "data_offset": 2048, 00:20:40.442 "data_size": 63488 00:20:40.442 }, 00:20:40.442 { 00:20:40.442 "name": "BaseBdev2", 00:20:40.442 "uuid": "b73ffe5e-610e-588f-b320-725f1980c228", 00:20:40.442 "is_configured": true, 00:20:40.442 "data_offset": 2048, 00:20:40.442 "data_size": 63488 00:20:40.442 } 00:20:40.442 ] 00:20:40.442 }' 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.442 21:03:08 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:40.701 21:03:08 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.701 21:03:08 -- bdev/bdev_raid.sh@709 -- # killprocess 123442 00:20:40.701 21:03:08 -- common/autotest_common.sh@926 -- # '[' -z 123442 ']' 00:20:40.701 21:03:08 -- common/autotest_common.sh@930 -- # kill -0 123442 00:20:40.701 21:03:08 -- common/autotest_common.sh@931 -- # uname 00:20:40.701 21:03:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.701 21:03:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123442 00:20:40.701 21:03:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:40.701 killing process with pid 123442 00:20:40.701 Received shutdown signal, test time was about 15.723213 seconds 00:20:40.701 00:20:40.701 Latency(us) 00:20:40.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.701 =================================================================================================================== 00:20:40.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.701 21:03:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:40.701 21:03:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123442' 00:20:40.701 21:03:08 -- common/autotest_common.sh@945 -- # kill 123442 00:20:40.701 21:03:08 -- common/autotest_common.sh@950 -- # wait 123442 00:20:40.701 [2024-06-09 21:03:08.826174] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:40.701 [2024-06-09 21:03:08.826273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.701 [2024-06-09 21:03:08.826349] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:40.701 [2024-06-09 21:03:08.826363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:20:40.960 [2024-06-09 21:03:08.984141] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:41.895 21:03:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:41.895 00:20:41.895 real 0m21.058s 00:20:41.895 user 0m33.417s 00:20:41.895 sys 0m2.214s 00:20:41.895 21:03:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:41.895 21:03:09 -- common/autotest_common.sh@10 -- # set +x 00:20:41.895 ************************************ 00:20:41.895 END TEST raid_rebuild_test_sb_io 00:20:41.895 ************************************ 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:41.895 21:03:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:41.895 21:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:41.895 21:03:10 -- common/autotest_common.sh@10 -- # set +x 00:20:41.895 ************************************ 00:20:41.895 START TEST raid_rebuild_test 00:20:41.895 ************************************ 00:20:41.895 21:03:10 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:41.895 
21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=124010 00:20:41.895 21:03:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124010 /var/tmp/spdk-raid.sock 00:20:41.895 21:03:10 -- common/autotest_common.sh@819 -- # '[' -z 124010 ']' 00:20:41.896 21:03:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:41.896 21:03:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:41.896 21:03:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:41.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:41.896 21:03:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:41.896 21:03:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:41.896 21:03:10 -- common/autotest_common.sh@10 -- # set +x 00:20:42.154 [2024-06-09 21:03:10.115428] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:42.154 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:42.154 Zero copy mechanism will not be used. 
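bdevperf here is launched in wait mode: -z keeps it idle on the RPC socket named by -r until the harness has built the bdev stack underneath it. -T raid_bdev1 names the target, and the remaining flags define the workload: 60 seconds (-t) of random mixed I/O (-w randrw, with -M 50 for a 50/50 split) at queue depth 2 (-q) with 3 MiB requests (-o 3M). That 3145728-byte request size is also why the startup notice above reports zero copy disabled: it exceeds the 65536-byte threshold. A condensed sketch of the launch-and-configure idiom, reusing the paths from this run (waitforlisten is the autotest_common.sh helper seen in the trace; this is the shape of the harness, not its verbatim source):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # start idle: no I/O can run before raid_bdev1 exists
    $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
              -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # only now build the stack underneath it, starting with the base bdevs
    $rpc bdev_malloc_create 32 512 -b BaseBdev1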
00:20:42.154 [2024-06-09 21:03:10.115642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124010 ] 00:20:42.154 [2024-06-09 21:03:10.275323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.413 [2024-06-09 21:03:10.446744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.671 [2024-06-09 21:03:10.614786] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.929 21:03:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.929 21:03:11 -- common/autotest_common.sh@852 -- # return 0 00:20:42.929 21:03:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:42.929 21:03:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:42.929 21:03:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:43.187 BaseBdev1 00:20:43.187 21:03:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:43.187 21:03:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:43.188 21:03:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.446 BaseBdev2 00:20:43.446 21:03:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:43.446 21:03:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:43.446 21:03:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:43.705 BaseBdev3 00:20:43.705 21:03:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:43.705 21:03:11 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:43.705 21:03:11 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:43.963 BaseBdev4 00:20:43.963 21:03:12 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:44.221 spare_malloc 00:20:44.221 21:03:12 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:44.479 spare_delay 00:20:44.479 21:03:12 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:44.737 [2024-06-09 21:03:12.661367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:44.737 [2024-06-09 21:03:12.661532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.737 [2024-06-09 21:03:12.661590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:44.737 [2024-06-09 21:03:12.661645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.737 [2024-06-09 21:03:12.664231] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.737 [2024-06-09 21:03:12.664286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:44.737 spare 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:44.737 [2024-06-09 21:03:12.865329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.737 [2024-06-09 21:03:12.867317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.737 [2024-06-09 21:03:12.867370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:44.737 [2024-06-09 21:03:12.867408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:44.737 [2024-06-09 21:03:12.867509] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:44.737 [2024-06-09 21:03:12.867522] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:44.737 [2024-06-09 21:03:12.867650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:44.737 [2024-06-09 21:03:12.868033] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:44.737 [2024-06-09 21:03:12.868048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:44.737 [2024-06-09 21:03:12.868197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.737 21:03:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.995 21:03:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.995 "name": "raid_bdev1", 00:20:44.995 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:44.995 "strip_size_kb": 0, 00:20:44.995 "state": "online", 00:20:44.995 "raid_level": "raid1", 00:20:44.995 "superblock": false, 00:20:44.995 "num_base_bdevs": 4, 00:20:44.995 "num_base_bdevs_discovered": 4, 00:20:44.995 "num_base_bdevs_operational": 4, 00:20:44.995 "base_bdevs_list": [ 00:20:44.995 { 00:20:44.995 "name": "BaseBdev1", 00:20:44.995 "uuid": "00fd98e1-3ec7-4d7d-a0d0-170abd7183b6", 00:20:44.995 "is_configured": true, 00:20:44.995 "data_offset": 0, 00:20:44.995 "data_size": 65536 00:20:44.995 }, 00:20:44.995 { 00:20:44.995 "name": "BaseBdev2", 00:20:44.995 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:44.995 "is_configured": true, 00:20:44.995 "data_offset": 0, 00:20:44.995 "data_size": 65536 00:20:44.995 }, 00:20:44.995 { 00:20:44.995 "name": "BaseBdev3", 00:20:44.995 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:44.995 "is_configured": true, 00:20:44.995 "data_offset": 0, 00:20:44.995 "data_size": 65536 00:20:44.995 }, 
00:20:44.995 { 00:20:44.995 "name": "BaseBdev4", 00:20:44.995 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:44.995 "is_configured": true, 00:20:44.995 "data_offset": 0, 00:20:44.995 "data_size": 65536 00:20:44.995 } 00:20:44.995 ] 00:20:44.995 }' 00:20:44.995 21:03:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.995 21:03:13 -- common/autotest_common.sh@10 -- # set +x 00:20:45.561 21:03:13 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:45.561 21:03:13 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:45.819 [2024-06-09 21:03:13.917802] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.819 21:03:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:45.819 21:03:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.819 21:03:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:46.078 21:03:14 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:46.078 21:03:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:46.078 21:03:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:46.078 21:03:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@12 -- # local i 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.078 21:03:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:46.336 [2024-06-09 21:03:14.281569] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:46.336 /dev/nbd0 00:20:46.336 21:03:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:46.336 21:03:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.336 21:03:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:46.336 21:03:14 -- common/autotest_common.sh@857 -- # local i 00:20:46.336 21:03:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:46.336 21:03:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:46.336 21:03:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:46.336 21:03:14 -- common/autotest_common.sh@861 -- # break 00:20:46.336 21:03:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:46.336 21:03:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:46.336 21:03:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.336 1+0 records in 00:20:46.336 1+0 records out 00:20:46.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229284 s, 17.9 MB/s 00:20:46.336 21:03:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.336 21:03:14 -- common/autotest_common.sh@874 -- # size=4096 00:20:46.336 21:03:14 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.336 21:03:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:46.336 21:03:14 -- common/autotest_common.sh@877 -- # return 0 00:20:46.336 21:03:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.336 21:03:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.336 21:03:14 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:46.336 21:03:14 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:46.336 21:03:14 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:51.605 65536+0 records in 00:20:51.605 65536+0 records out 00:20:51.605 33554432 bytes (34 MB, 32 MiB) copied, 5.2441 s, 6.4 MB/s 00:20:51.605 21:03:19 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@51 -- # local i 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.605 21:03:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:51.878 [2024-06-09 21:03:19.830757] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@41 -- # break 00:20:51.878 21:03:19 -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.878 21:03:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:51.878 [2024-06-09 21:03:20.014413] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.878 21:03:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.136 21:03:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.136 "name": "raid_bdev1", 00:20:52.136 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:52.136 "strip_size_kb": 0, 00:20:52.136 "state": "online", 00:20:52.136 
"raid_level": "raid1", 00:20:52.136 "superblock": false, 00:20:52.136 "num_base_bdevs": 4, 00:20:52.136 "num_base_bdevs_discovered": 3, 00:20:52.136 "num_base_bdevs_operational": 3, 00:20:52.136 "base_bdevs_list": [ 00:20:52.136 { 00:20:52.136 "name": null, 00:20:52.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.136 "is_configured": false, 00:20:52.136 "data_offset": 0, 00:20:52.136 "data_size": 65536 00:20:52.136 }, 00:20:52.136 { 00:20:52.136 "name": "BaseBdev2", 00:20:52.136 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:52.136 "is_configured": true, 00:20:52.136 "data_offset": 0, 00:20:52.136 "data_size": 65536 00:20:52.136 }, 00:20:52.136 { 00:20:52.136 "name": "BaseBdev3", 00:20:52.136 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:52.136 "is_configured": true, 00:20:52.136 "data_offset": 0, 00:20:52.136 "data_size": 65536 00:20:52.136 }, 00:20:52.136 { 00:20:52.136 "name": "BaseBdev4", 00:20:52.136 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:52.136 "is_configured": true, 00:20:52.136 "data_offset": 0, 00:20:52.136 "data_size": 65536 00:20:52.136 } 00:20:52.136 ] 00:20:52.136 }' 00:20:52.136 21:03:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.136 21:03:20 -- common/autotest_common.sh@10 -- # set +x 00:20:53.071 21:03:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:53.071 [2024-06-09 21:03:21.074619] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:53.071 [2024-06-09 21:03:21.074657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:53.071 [2024-06-09 21:03:21.085423] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:20:53.071 [2024-06-09 21:03:21.087427] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:53.071 21:03:21 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.005 21:03:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.262 21:03:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.262 "name": "raid_bdev1", 00:20:54.262 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:54.262 "strip_size_kb": 0, 00:20:54.262 "state": "online", 00:20:54.262 "raid_level": "raid1", 00:20:54.262 "superblock": false, 00:20:54.262 "num_base_bdevs": 4, 00:20:54.262 "num_base_bdevs_discovered": 4, 00:20:54.262 "num_base_bdevs_operational": 4, 00:20:54.262 "process": { 00:20:54.262 "type": "rebuild", 00:20:54.262 "target": "spare", 00:20:54.262 "progress": { 00:20:54.262 "blocks": 24576, 00:20:54.262 "percent": 37 00:20:54.262 } 00:20:54.262 }, 00:20:54.262 "base_bdevs_list": [ 00:20:54.262 { 00:20:54.262 "name": "spare", 00:20:54.262 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:20:54.262 "is_configured": true, 00:20:54.262 "data_offset": 0, 00:20:54.262 "data_size": 65536 00:20:54.262 }, 
00:20:54.262 { 00:20:54.262 "name": "BaseBdev2", 00:20:54.262 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:54.262 "is_configured": true, 00:20:54.262 "data_offset": 0, 00:20:54.262 "data_size": 65536 00:20:54.262 }, 00:20:54.262 { 00:20:54.262 "name": "BaseBdev3", 00:20:54.262 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:54.262 "is_configured": true, 00:20:54.262 "data_offset": 0, 00:20:54.262 "data_size": 65536 00:20:54.262 }, 00:20:54.262 { 00:20:54.262 "name": "BaseBdev4", 00:20:54.262 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:54.262 "is_configured": true, 00:20:54.262 "data_offset": 0, 00:20:54.262 "data_size": 65536 00:20:54.262 } 00:20:54.262 ] 00:20:54.262 }' 00:20:54.262 21:03:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.262 21:03:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.262 21:03:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.521 21:03:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.521 21:03:22 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:54.521 [2024-06-09 21:03:22.674025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:54.521 [2024-06-09 21:03:22.697314] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:54.521 [2024-06-09 21:03:22.697435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.780 21:03:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.038 21:03:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.038 "name": "raid_bdev1", 00:20:55.038 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:55.038 "strip_size_kb": 0, 00:20:55.038 "state": "online", 00:20:55.038 "raid_level": "raid1", 00:20:55.038 "superblock": false, 00:20:55.038 "num_base_bdevs": 4, 00:20:55.038 "num_base_bdevs_discovered": 3, 00:20:55.038 "num_base_bdevs_operational": 3, 00:20:55.038 "base_bdevs_list": [ 00:20:55.038 { 00:20:55.038 "name": null, 00:20:55.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.038 "is_configured": false, 00:20:55.038 "data_offset": 0, 00:20:55.038 "data_size": 65536 00:20:55.038 }, 00:20:55.038 { 00:20:55.038 "name": "BaseBdev2", 00:20:55.038 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:55.038 "is_configured": true, 00:20:55.038 "data_offset": 0, 00:20:55.038 "data_size": 65536 00:20:55.038 }, 00:20:55.038 { 00:20:55.038 "name": "BaseBdev3", 
00:20:55.038 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:55.038 "is_configured": true, 00:20:55.038 "data_offset": 0, 00:20:55.038 "data_size": 65536 00:20:55.038 }, 00:20:55.038 { 00:20:55.038 "name": "BaseBdev4", 00:20:55.038 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:55.038 "is_configured": true, 00:20:55.038 "data_offset": 0, 00:20:55.038 "data_size": 65536 00:20:55.038 } 00:20:55.038 ] 00:20:55.038 }' 00:20:55.038 21:03:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.038 21:03:22 -- common/autotest_common.sh@10 -- # set +x 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.605 21:03:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.864 "name": "raid_bdev1", 00:20:55.864 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:55.864 "strip_size_kb": 0, 00:20:55.864 "state": "online", 00:20:55.864 "raid_level": "raid1", 00:20:55.864 "superblock": false, 00:20:55.864 "num_base_bdevs": 4, 00:20:55.864 "num_base_bdevs_discovered": 3, 00:20:55.864 "num_base_bdevs_operational": 3, 00:20:55.864 "base_bdevs_list": [ 00:20:55.864 { 00:20:55.864 "name": null, 00:20:55.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.864 "is_configured": false, 00:20:55.864 "data_offset": 0, 00:20:55.864 "data_size": 65536 00:20:55.864 }, 00:20:55.864 { 00:20:55.864 "name": "BaseBdev2", 00:20:55.864 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:55.864 "is_configured": true, 00:20:55.864 "data_offset": 0, 00:20:55.864 "data_size": 65536 00:20:55.864 }, 00:20:55.864 { 00:20:55.864 "name": "BaseBdev3", 00:20:55.864 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:55.864 "is_configured": true, 00:20:55.864 "data_offset": 0, 00:20:55.864 "data_size": 65536 00:20:55.864 }, 00:20:55.864 { 00:20:55.864 "name": "BaseBdev4", 00:20:55.864 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:55.864 "is_configured": true, 00:20:55.864 "data_offset": 0, 00:20:55.864 "data_size": 65536 00:20:55.864 } 00:20:55.864 ] 00:20:55.864 }' 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:55.864 21:03:23 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.122 [2024-06-09 21:03:24.133624] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:56.122 [2024-06-09 21:03:24.133676] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.122 [2024-06-09 21:03:24.144457] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:20:56.122 [2024-06-09 21:03:24.146519] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:20:56.122 21:03:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.056 21:03:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.314 21:03:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.314 "name": "raid_bdev1", 00:20:57.314 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:57.314 "strip_size_kb": 0, 00:20:57.314 "state": "online", 00:20:57.314 "raid_level": "raid1", 00:20:57.314 "superblock": false, 00:20:57.314 "num_base_bdevs": 4, 00:20:57.314 "num_base_bdevs_discovered": 4, 00:20:57.314 "num_base_bdevs_operational": 4, 00:20:57.314 "process": { 00:20:57.314 "type": "rebuild", 00:20:57.314 "target": "spare", 00:20:57.314 "progress": { 00:20:57.314 "blocks": 24576, 00:20:57.314 "percent": 37 00:20:57.314 } 00:20:57.314 }, 00:20:57.314 "base_bdevs_list": [ 00:20:57.314 { 00:20:57.314 "name": "spare", 00:20:57.314 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:20:57.314 "is_configured": true, 00:20:57.314 "data_offset": 0, 00:20:57.314 "data_size": 65536 00:20:57.314 }, 00:20:57.314 { 00:20:57.314 "name": "BaseBdev2", 00:20:57.314 "uuid": "1bcd8269-b9b1-4bb9-a7e8-50c55de65791", 00:20:57.314 "is_configured": true, 00:20:57.314 "data_offset": 0, 00:20:57.314 "data_size": 65536 00:20:57.314 }, 00:20:57.314 { 00:20:57.314 "name": "BaseBdev3", 00:20:57.314 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:57.314 "is_configured": true, 00:20:57.314 "data_offset": 0, 00:20:57.314 "data_size": 65536 00:20:57.314 }, 00:20:57.314 { 00:20:57.314 "name": "BaseBdev4", 00:20:57.314 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:57.314 "is_configured": true, 00:20:57.314 "data_offset": 0, 00:20:57.314 "data_size": 65536 00:20:57.314 } 00:20:57.314 ] 00:20:57.314 }' 00:20:57.314 21:03:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.314 21:03:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.314 21:03:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:57.572 21:03:25 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:57.572 [2024-06-09 21:03:25.697185] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:57.830 [2024-06-09 21:03:25.756741] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:57.830 21:03:25 -- 
bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.830 "name": "raid_bdev1", 00:20:57.830 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:57.830 "strip_size_kb": 0, 00:20:57.830 "state": "online", 00:20:57.830 "raid_level": "raid1", 00:20:57.830 "superblock": false, 00:20:57.830 "num_base_bdevs": 4, 00:20:57.830 "num_base_bdevs_discovered": 3, 00:20:57.830 "num_base_bdevs_operational": 3, 00:20:57.830 "process": { 00:20:57.830 "type": "rebuild", 00:20:57.830 "target": "spare", 00:20:57.830 "progress": { 00:20:57.830 "blocks": 34816, 00:20:57.830 "percent": 53 00:20:57.830 } 00:20:57.830 }, 00:20:57.830 "base_bdevs_list": [ 00:20:57.830 { 00:20:57.830 "name": "spare", 00:20:57.830 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:20:57.830 "is_configured": true, 00:20:57.830 "data_offset": 0, 00:20:57.830 "data_size": 65536 00:20:57.830 }, 00:20:57.830 { 00:20:57.830 "name": null, 00:20:57.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.830 "is_configured": false, 00:20:57.830 "data_offset": 0, 00:20:57.830 "data_size": 65536 00:20:57.830 }, 00:20:57.830 { 00:20:57.830 "name": "BaseBdev3", 00:20:57.830 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:57.830 "is_configured": true, 00:20:57.830 "data_offset": 0, 00:20:57.830 "data_size": 65536 00:20:57.830 }, 00:20:57.830 { 00:20:57.830 "name": "BaseBdev4", 00:20:57.830 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:57.830 "is_configured": true, 00:20:57.830 "data_offset": 0, 00:20:57.830 "data_size": 65536 00:20:57.830 } 00:20:57.830 ] 00:20:57.830 }' 00:20:57.830 21:03:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@657 -- # local timeout=478 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.089 21:03:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.347 "name": "raid_bdev1", 00:20:58.347 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:58.347 "strip_size_kb": 0, 00:20:58.347 
"state": "online", 00:20:58.347 "raid_level": "raid1", 00:20:58.347 "superblock": false, 00:20:58.347 "num_base_bdevs": 4, 00:20:58.347 "num_base_bdevs_discovered": 3, 00:20:58.347 "num_base_bdevs_operational": 3, 00:20:58.347 "process": { 00:20:58.347 "type": "rebuild", 00:20:58.347 "target": "spare", 00:20:58.347 "progress": { 00:20:58.347 "blocks": 43008, 00:20:58.347 "percent": 65 00:20:58.347 } 00:20:58.347 }, 00:20:58.347 "base_bdevs_list": [ 00:20:58.347 { 00:20:58.347 "name": "spare", 00:20:58.347 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:20:58.347 "is_configured": true, 00:20:58.347 "data_offset": 0, 00:20:58.347 "data_size": 65536 00:20:58.347 }, 00:20:58.347 { 00:20:58.347 "name": null, 00:20:58.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.347 "is_configured": false, 00:20:58.347 "data_offset": 0, 00:20:58.347 "data_size": 65536 00:20:58.347 }, 00:20:58.347 { 00:20:58.347 "name": "BaseBdev3", 00:20:58.347 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:58.347 "is_configured": true, 00:20:58.347 "data_offset": 0, 00:20:58.347 "data_size": 65536 00:20:58.347 }, 00:20:58.347 { 00:20:58.347 "name": "BaseBdev4", 00:20:58.347 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:58.347 "is_configured": true, 00:20:58.347 "data_offset": 0, 00:20:58.347 "data_size": 65536 00:20:58.347 } 00:20:58.347 ] 00:20:58.347 }' 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.347 21:03:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:59.282 [2024-06-09 21:03:27.366003] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:59.282 [2024-06-09 21:03:27.366102] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:59.282 [2024-06-09 21:03:27.366203] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.282 21:03:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.540 21:03:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.540 "name": "raid_bdev1", 00:20:59.540 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:20:59.540 "strip_size_kb": 0, 00:20:59.540 "state": "online", 00:20:59.540 "raid_level": "raid1", 00:20:59.540 "superblock": false, 00:20:59.540 "num_base_bdevs": 4, 00:20:59.540 "num_base_bdevs_discovered": 3, 00:20:59.540 "num_base_bdevs_operational": 3, 00:20:59.540 "base_bdevs_list": [ 00:20:59.540 { 00:20:59.540 "name": "spare", 00:20:59.540 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:20:59.540 "is_configured": true, 00:20:59.540 "data_offset": 0, 00:20:59.540 "data_size": 65536 00:20:59.540 
}, 00:20:59.540 { 00:20:59.540 "name": null, 00:20:59.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.540 "is_configured": false, 00:20:59.540 "data_offset": 0, 00:20:59.540 "data_size": 65536 00:20:59.540 }, 00:20:59.540 { 00:20:59.540 "name": "BaseBdev3", 00:20:59.540 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:20:59.540 "is_configured": true, 00:20:59.540 "data_offset": 0, 00:20:59.540 "data_size": 65536 00:20:59.540 }, 00:20:59.540 { 00:20:59.540 "name": "BaseBdev4", 00:20:59.540 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:20:59.540 "is_configured": true, 00:20:59.540 "data_offset": 0, 00:20:59.540 "data_size": 65536 00:20:59.540 } 00:20:59.540 ] 00:20:59.540 }' 00:20:59.540 21:03:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@660 -- # break 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.798 21:03:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.056 21:03:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.056 "name": "raid_bdev1", 00:21:00.056 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:21:00.056 "strip_size_kb": 0, 00:21:00.056 "state": "online", 00:21:00.056 "raid_level": "raid1", 00:21:00.056 "superblock": false, 00:21:00.056 "num_base_bdevs": 4, 00:21:00.056 "num_base_bdevs_discovered": 3, 00:21:00.056 "num_base_bdevs_operational": 3, 00:21:00.056 "base_bdevs_list": [ 00:21:00.056 { 00:21:00.056 "name": "spare", 00:21:00.056 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:21:00.056 "is_configured": true, 00:21:00.056 "data_offset": 0, 00:21:00.056 "data_size": 65536 00:21:00.056 }, 00:21:00.056 { 00:21:00.056 "name": null, 00:21:00.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.056 "is_configured": false, 00:21:00.056 "data_offset": 0, 00:21:00.056 "data_size": 65536 00:21:00.056 }, 00:21:00.056 { 00:21:00.056 "name": "BaseBdev3", 00:21:00.056 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:21:00.056 "is_configured": true, 00:21:00.056 "data_offset": 0, 00:21:00.056 "data_size": 65536 00:21:00.056 }, 00:21:00.056 { 00:21:00.056 "name": "BaseBdev4", 00:21:00.056 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:21:00.056 "is_configured": true, 00:21:00.056 "data_offset": 0, 00:21:00.056 "data_size": 65536 00:21:00.056 } 00:21:00.056 ] 00:21:00.056 }' 00:21:00.056 21:03:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.056 
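Every verify_raid_bdev_state and verify_raid_bdev_process call in this trace reduces to the same idiom: dump all raid bdevs as JSON over the private RPC socket, select the entry by name with jq, and assert on individual fields, with jq's // operator supplying "none" once the rebuild's process object has disappeared from the output. A condensed sketch of the checks being made at this point, using the same socket and names as the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    [[ $(jq -r '.state' <<< "$info") == online ]]                  # array still up
    [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]]   # rebuild finished
    (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ))   # one base bdev removed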
21:03:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.056 21:03:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.057 21:03:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.057 21:03:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.057 21:03:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.315 21:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.315 "name": "raid_bdev1", 00:21:00.315 "uuid": "58812870-2a1f-43cb-bda1-e7859753735c", 00:21:00.315 "strip_size_kb": 0, 00:21:00.315 "state": "online", 00:21:00.315 "raid_level": "raid1", 00:21:00.315 "superblock": false, 00:21:00.315 "num_base_bdevs": 4, 00:21:00.315 "num_base_bdevs_discovered": 3, 00:21:00.315 "num_base_bdevs_operational": 3, 00:21:00.315 "base_bdevs_list": [ 00:21:00.315 { 00:21:00.315 "name": "spare", 00:21:00.315 "uuid": "93f2eb86-1ba2-5fac-84b1-f2174a68b2d6", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 65536 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": null, 00:21:00.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.315 "is_configured": false, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 65536 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev3", 00:21:00.315 "uuid": "09d32796-b2a8-4172-a634-13ba025b7748", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 65536 00:21:00.315 }, 00:21:00.315 { 00:21:00.315 "name": "BaseBdev4", 00:21:00.315 "uuid": "e7a488f5-eb67-4a64-a545-ac945b70b57c", 00:21:00.315 "is_configured": true, 00:21:00.315 "data_offset": 0, 00:21:00.315 "data_size": 65536 00:21:00.315 } 00:21:00.315 ] 00:21:00.315 }' 00:21:00.315 21:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.315 21:03:28 -- common/autotest_common.sh@10 -- # set +x 00:21:00.881 21:03:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:01.140 [2024-06-09 21:03:29.120094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.140 [2024-06-09 21:03:29.120146] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.140 [2024-06-09 21:03:29.120259] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.140 [2024-06-09 21:03:29.120352] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.140 [2024-06-09 21:03:29.120366] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:01.140 21:03:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.140 21:03:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:01.399 21:03:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:01.399 21:03:29 
-- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:01.399 21:03:29 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@12 -- # local i 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.399 21:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:01.658 /dev/nbd0 00:21:01.658 21:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.658 21:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.658 21:03:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:01.658 21:03:29 -- common/autotest_common.sh@857 -- # local i 00:21:01.658 21:03:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:01.658 21:03:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:01.658 21:03:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:01.658 21:03:29 -- common/autotest_common.sh@861 -- # break 00:21:01.658 21:03:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:01.658 21:03:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:01.658 21:03:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.658 1+0 records in 00:21:01.658 1+0 records out 00:21:01.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791606 s, 5.2 MB/s 00:21:01.658 21:03:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.658 21:03:29 -- common/autotest_common.sh@874 -- # size=4096 00:21:01.658 21:03:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.658 21:03:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:01.658 21:03:29 -- common/autotest_common.sh@877 -- # return 0 00:21:01.658 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.658 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.658 21:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:01.916 /dev/nbd1 00:21:01.916 21:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.916 21:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.916 21:03:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:01.916 21:03:29 -- common/autotest_common.sh@857 -- # local i 00:21:01.916 21:03:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:01.916 21:03:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:01.916 21:03:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:01.916 21:03:29 -- common/autotest_common.sh@861 -- # break 00:21:01.916 21:03:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:01.916 21:03:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:01.916 21:03:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 
00:21:01.658 21:03:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:21:01.916 /dev/nbd1
00:21:01.916 21:03:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:01.916 21:03:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:01.916 21:03:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:21:01.916 21:03:29 -- common/autotest_common.sh@857 -- # local i
00:21:01.916 21:03:29 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:21:01.916 21:03:29 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:21:01.916 21:03:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:21:01.916 21:03:29 -- common/autotest_common.sh@861 -- # break
00:21:01.916 21:03:29 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:21:01.916 21:03:29 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:21:01.916 21:03:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:01.916 1+0 records in
00:21:01.916 1+0 records out
00:21:01.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506138 s, 8.1 MB/s
00:21:01.916 21:03:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:01.916 21:03:29 -- common/autotest_common.sh@874 -- # size=4096
00:21:01.916 21:03:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:01.916 21:03:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:21:01.916 21:03:29 -- common/autotest_common.sh@877 -- # return 0
00:21:01.916 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:01.916 21:03:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:01.916 21:03:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:21:02.175 21:03:30 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@51 -- # local i
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:02.175 21:03:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@41 -- # break
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@45 -- # return 0
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@41 -- # break
00:21:02.434 21:03:30 -- bdev/nbd_common.sh@45 -- # return 0
00:21:02.434 21:03:30 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']'
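The cmp -i 0 /dev/nbd0 /dev/nbd1 above is the actual data-integrity check of the rebuild test: BaseBdev1 (exported as /dev/nbd0) and the rebuilt spare (/dev/nbd1) must be byte-identical. The -i offset is the raid's data offset in bytes; it is 0 here because this run created the raid without a superblock, while the superblock variant later in this log compares with -i 1048576, matching its reported data_offset of 2048 blocks times the 512-byte block size. A rough sketch of the idiom with the offset arithmetic spelled out (variable names are illustrative):

    # compare two exported bdevs, skipping the superblock region
    data_offset_blocks=2048   # from bdev_raid_get_bdevs; 0 when superblock=false
    blocklen=512              # the log reports "blockcnt 63488, blocklen 512"
    cmp -i $((data_offset_blocks * blocklen)) /dev/nbd0 /dev/nbd1   # 2048 * 512 = 1048576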
00:21:02.434 21:03:30 -- bdev/bdev_raid.sh@709 -- # killprocess 124010
00:21:02.434 21:03:30 -- common/autotest_common.sh@926 -- # '[' -z 124010 ']'
00:21:02.434 21:03:30 -- common/autotest_common.sh@930 -- # kill -0 124010
00:21:02.434 21:03:30 -- common/autotest_common.sh@931 -- # uname
00:21:02.434 21:03:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:02.434 21:03:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124010
00:21:02.692 21:03:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:02.692 21:03:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:02.692 21:03:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124010' killing process with pid 124010
00:21:02.692 21:03:30 -- common/autotest_common.sh@945 -- # kill 124010
00:21:02.692 Received shutdown signal, test time was about 60.000000 seconds
00:21:02.692
00:21:02.692 Latency(us)
00:21:02.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:02.692 ===================================================================================================================
00:21:02.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:21:02.692 [2024-06-09 21:03:30.618905] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:21:02.692 21:03:30 -- common/autotest_common.sh@950 -- # wait 124010
00:21:02.951 [2024-06-09 21:03:30.940740] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@711 -- # return 0
00:21:03.888
00:21:03.888 real 0m21.812s
00:21:03.888 user 0m30.009s
00:21:03.888 sys 0m3.499s
00:21:03.888 21:03:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:03.888 21:03:31 -- common/autotest_common.sh@10 -- # set +x
00:21:03.888 ************************************
00:21:03.888 END TEST raid_rebuild_test
00:21:03.888 ************************************
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false
00:21:03.888 21:03:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:21:03.888 21:03:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:21:03.888 21:03:31 -- common/autotest_common.sh@10 -- # set +x
00:21:03.888 ************************************
00:21:03.888 START TEST raid_rebuild_test_sb
00:21:03.888 ************************************
00:21:03.888 21:03:31 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false
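run_test prints the banner and timing around the wrapped call and passes the remaining arguments straight through, so this run executes raid_rebuild_test raid1 4 true false. Read against the local declarations that follow, the positional parameters map as in this sketch (the function name here is invented for illustration; only the mapping is taken from the trace):

    raid_rebuild_test_args() {
        local raid_level=$1       # raid1
        local num_base_bdevs=$2   # 4
        local superblock=$3       # true: bdev_raid_create gets -s via create_arg
        local background_io=$4    # false: no extra I/O load during the rebuild
    }
    raid_rebuild_test_args raid1 4 true false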
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@519 -- # local superblock=true
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@520 -- # local background_io=false
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i = 1 ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i++ ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs ))
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@521 -- # local base_bdevs
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@523 -- # local strip_size
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@524 -- # local create_arg
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@526 -- # local data_offset
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']'
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@536 -- # strip_size=0
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']'
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s'
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@544 -- # raid_pid=124547
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124547 /var/tmp/spdk-raid.sock
00:21:03.888 21:03:31 -- common/autotest_common.sh@819 -- # '[' -z 124547 ']'
00:21:03.888 21:03:31 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:21:03.888 21:03:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:21:03.888 21:03:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:03.888 21:03:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:21:03.888 21:03:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:03.888 21:03:31 -- common/autotest_common.sh@10 -- # set +x
00:21:03.888 [2024-06-09 21:03:31.975769] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:21:03.888 I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:03.888 Zero copy mechanism will not be used.
00:21:03.888 [2024-06-09 21:03:31.975944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124547 ]
00:21:04.195 [2024-06-09 21:03:32.129479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:04.195 [2024-06-09 21:03:32.317546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:04.452 [2024-06-09 21:03:32.482548] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:05.018 21:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:05.018 21:03:32 -- common/autotest_common.sh@852 -- # return 0
00:21:05.018 21:03:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}"
00:21:05.018 21:03:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']'
00:21:05.018 21:03:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:21:05.018 BaseBdev1_malloc
00:21:05.018 21:03:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:05.276 [2024-06-09 21:03:33.373696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:05.276 [2024-06-09 21:03:33.373816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:05.276 [2024-06-09 21:03:33.373855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:21:05.276 [2024-06-09 21:03:33.373916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:05.276 [2024-06-09
21:03:33.376264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.276 [2024-06-09 21:03:33.376331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.276 BaseBdev1 00:21:05.276 21:03:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:05.276 21:03:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:05.276 21:03:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:05.842 BaseBdev2_malloc 00:21:05.842 21:03:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:05.842 [2024-06-09 21:03:33.933226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:05.842 [2024-06-09 21:03:33.933327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.842 [2024-06-09 21:03:33.933370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:05.842 [2024-06-09 21:03:33.933423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.842 [2024-06-09 21:03:33.935934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.842 [2024-06-09 21:03:33.936003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.842 BaseBdev2 00:21:05.843 21:03:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:05.843 21:03:33 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:05.843 21:03:33 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:06.101 BaseBdev3_malloc 00:21:06.101 21:03:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:06.360 [2024-06-09 21:03:34.458127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:06.360 [2024-06-09 21:03:34.458241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.360 [2024-06-09 21:03:34.458285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:06.360 [2024-06-09 21:03:34.458329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.360 [2024-06-09 21:03:34.460907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.360 [2024-06-09 21:03:34.460980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:06.360 BaseBdev3 00:21:06.360 21:03:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:06.360 21:03:34 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:06.360 21:03:34 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:06.619 BaseBdev4_malloc 00:21:06.619 21:03:34 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:06.878 [2024-06-09 21:03:34.975424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:06.878 [2024-06-09 21:03:34.975549] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.878 [2024-06-09 21:03:34.975588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:06.878 [2024-06-09 21:03:34.975635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.878 [2024-06-09 21:03:34.978096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.878 [2024-06-09 21:03:34.978169] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:06.878 BaseBdev4 00:21:06.878 21:03:34 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:07.136 spare_malloc 00:21:07.136 21:03:35 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:07.395 spare_delay 00:21:07.395 21:03:35 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:07.395 [2024-06-09 21:03:35.561413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:07.395 [2024-06-09 21:03:35.561525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.395 [2024-06-09 21:03:35.561570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:07.395 [2024-06-09 21:03:35.561613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.396 [2024-06-09 21:03:35.563996] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.396 [2024-06-09 21:03:35.564076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:07.396 spare 00:21:07.654 21:03:35 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:07.655 [2024-06-09 21:03:35.753541] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.655 [2024-06-09 21:03:35.755546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.655 [2024-06-09 21:03:35.755640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.655 [2024-06-09 21:03:35.755698] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:07.655 [2024-06-09 21:03:35.755959] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:07.655 [2024-06-09 21:03:35.755996] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:07.655 [2024-06-09 21:03:35.756150] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:07.655 [2024-06-09 21:03:35.756641] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:07.655 [2024-06-09 21:03:35.756668] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:07.655 [2024-06-09 21:03:35.756925] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=raid_bdev1 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.655 21:03:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.914 21:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.914 "name": "raid_bdev1", 00:21:07.914 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:07.914 "strip_size_kb": 0, 00:21:07.914 "state": "online", 00:21:07.914 "raid_level": "raid1", 00:21:07.914 "superblock": true, 00:21:07.914 "num_base_bdevs": 4, 00:21:07.914 "num_base_bdevs_discovered": 4, 00:21:07.914 "num_base_bdevs_operational": 4, 00:21:07.914 "base_bdevs_list": [ 00:21:07.914 { 00:21:07.914 "name": "BaseBdev1", 00:21:07.914 "uuid": "a6187c9a-42d6-54a6-b610-dad0018470a7", 00:21:07.914 "is_configured": true, 00:21:07.914 "data_offset": 2048, 00:21:07.914 "data_size": 63488 00:21:07.914 }, 00:21:07.914 { 00:21:07.914 "name": "BaseBdev2", 00:21:07.914 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:07.914 "is_configured": true, 00:21:07.914 "data_offset": 2048, 00:21:07.914 "data_size": 63488 00:21:07.914 }, 00:21:07.914 { 00:21:07.914 "name": "BaseBdev3", 00:21:07.914 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:07.914 "is_configured": true, 00:21:07.914 "data_offset": 2048, 00:21:07.914 "data_size": 63488 00:21:07.914 }, 00:21:07.914 { 00:21:07.914 "name": "BaseBdev4", 00:21:07.914 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:07.914 "is_configured": true, 00:21:07.914 "data_offset": 2048, 00:21:07.914 "data_size": 63488 00:21:07.914 } 00:21:07.914 ] 00:21:07.914 }' 00:21:07.914 21:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.914 21:03:35 -- common/autotest_common.sh@10 -- # set +x 00:21:08.482 21:03:36 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:08.482 21:03:36 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:08.740 [2024-06-09 21:03:36.806056] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.740 21:03:36 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:08.740 21:03:36 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.740 21:03:36 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:08.999 21:03:37 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:08.999 21:03:37 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:08.999 21:03:37 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:08.999 21:03:37 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('raid_bdev1') 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.999 21:03:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:09.258 [2024-06-09 21:03:37.281913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:09.258 /dev/nbd0 00:21:09.258 21:03:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:09.258 21:03:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:09.258 21:03:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:09.258 21:03:37 -- common/autotest_common.sh@857 -- # local i 00:21:09.258 21:03:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:09.258 21:03:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:09.258 21:03:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:09.258 21:03:37 -- common/autotest_common.sh@861 -- # break 00:21:09.258 21:03:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:09.258 21:03:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:09.258 21:03:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:09.258 1+0 records in 00:21:09.258 1+0 records out 00:21:09.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233353 s, 17.6 MB/s 00:21:09.258 21:03:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.258 21:03:37 -- common/autotest_common.sh@874 -- # size=4096 00:21:09.258 21:03:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.258 21:03:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:09.258 21:03:37 -- common/autotest_common.sh@877 -- # return 0 00:21:09.258 21:03:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:09.258 21:03:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.258 21:03:37 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:09.258 21:03:37 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:09.258 21:03:37 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:15.818 63488+0 records in 00:21:15.818 63488+0 records out 00:21:15.818 32505856 bytes (33 MB, 31 MiB) copied, 6.32993 s, 5.1 MB/s 00:21:15.818 21:03:43 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@51 -- # local i 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@41 -- # break 00:21:15.818 21:03:43 -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.818 21:03:43 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:15.818 [2024-06-09 21:03:43.912170] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.077 [2024-06-09 21:03:44.103879] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.077 21:03:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.336 21:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.336 "name": "raid_bdev1", 00:21:16.336 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:16.336 "strip_size_kb": 0, 00:21:16.336 "state": "online", 00:21:16.336 "raid_level": "raid1", 00:21:16.336 "superblock": true, 00:21:16.336 "num_base_bdevs": 4, 00:21:16.336 "num_base_bdevs_discovered": 3, 00:21:16.336 "num_base_bdevs_operational": 3, 00:21:16.336 "base_bdevs_list": [ 00:21:16.336 { 00:21:16.336 "name": null, 00:21:16.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.336 "is_configured": false, 00:21:16.336 "data_offset": 2048, 00:21:16.336 "data_size": 63488 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "name": "BaseBdev2", 00:21:16.336 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:16.336 "is_configured": true, 00:21:16.336 "data_offset": 2048, 00:21:16.336 "data_size": 63488 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "name": "BaseBdev3", 00:21:16.336 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:16.336 "is_configured": true, 00:21:16.336 "data_offset": 2048, 00:21:16.336 "data_size": 63488 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "name": "BaseBdev4", 00:21:16.336 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:16.336 "is_configured": true, 00:21:16.336 "data_offset": 2048, 00:21:16.336 "data_size": 63488 00:21:16.336 } 00:21:16.336 ] 00:21:16.336 }' 00:21:16.337 21:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.337 21:03:44 -- common/autotest_common.sh@10 -- # set +x 00:21:16.904 21:03:44 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:17.163 [2024-06-09 21:03:45.120026] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:21:17.163 [2024-06-09 21:03:45.120069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:17.163 [2024-06-09 21:03:45.130836] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:21:17.163 [2024-06-09 21:03:45.132870] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:17.163 21:03:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.099 21:03:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.373 "name": "raid_bdev1", 00:21:18.373 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:18.373 "strip_size_kb": 0, 00:21:18.373 "state": "online", 00:21:18.373 "raid_level": "raid1", 00:21:18.373 "superblock": true, 00:21:18.373 "num_base_bdevs": 4, 00:21:18.373 "num_base_bdevs_discovered": 4, 00:21:18.373 "num_base_bdevs_operational": 4, 00:21:18.373 "process": { 00:21:18.373 "type": "rebuild", 00:21:18.373 "target": "spare", 00:21:18.373 "progress": { 00:21:18.373 "blocks": 24576, 00:21:18.373 "percent": 38 00:21:18.373 } 00:21:18.373 }, 00:21:18.373 "base_bdevs_list": [ 00:21:18.373 { 00:21:18.373 "name": "spare", 00:21:18.373 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:18.373 "is_configured": true, 00:21:18.373 "data_offset": 2048, 00:21:18.373 "data_size": 63488 00:21:18.373 }, 00:21:18.373 { 00:21:18.373 "name": "BaseBdev2", 00:21:18.373 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:18.373 "is_configured": true, 00:21:18.373 "data_offset": 2048, 00:21:18.373 "data_size": 63488 00:21:18.373 }, 00:21:18.373 { 00:21:18.373 "name": "BaseBdev3", 00:21:18.373 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:18.373 "is_configured": true, 00:21:18.373 "data_offset": 2048, 00:21:18.373 "data_size": 63488 00:21:18.373 }, 00:21:18.373 { 00:21:18.373 "name": "BaseBdev4", 00:21:18.373 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:18.373 "is_configured": true, 00:21:18.373 "data_offset": 2048, 00:21:18.373 "data_size": 63488 00:21:18.373 } 00:21:18.373 ] 00:21:18.373 }' 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.373 21:03:46 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:18.664 [2024-06-09 21:03:46.683169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:18.664 [2024-06-09 21:03:46.743149] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:18.664 [2024-06-09 21:03:46.743243] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.664 21:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.665 21:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.665 21:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.665 21:03:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.923 21:03:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.923 "name": "raid_bdev1", 00:21:18.923 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:18.923 "strip_size_kb": 0, 00:21:18.923 "state": "online", 00:21:18.923 "raid_level": "raid1", 00:21:18.923 "superblock": true, 00:21:18.923 "num_base_bdevs": 4, 00:21:18.923 "num_base_bdevs_discovered": 3, 00:21:18.923 "num_base_bdevs_operational": 3, 00:21:18.923 "base_bdevs_list": [ 00:21:18.923 { 00:21:18.923 "name": null, 00:21:18.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.923 "is_configured": false, 00:21:18.923 "data_offset": 2048, 00:21:18.923 "data_size": 63488 00:21:18.923 }, 00:21:18.923 { 00:21:18.923 "name": "BaseBdev2", 00:21:18.923 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:18.923 "is_configured": true, 00:21:18.923 "data_offset": 2048, 00:21:18.923 "data_size": 63488 00:21:18.923 }, 00:21:18.923 { 00:21:18.923 "name": "BaseBdev3", 00:21:18.923 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:18.923 "is_configured": true, 00:21:18.923 "data_offset": 2048, 00:21:18.923 "data_size": 63488 00:21:18.923 }, 00:21:18.923 { 00:21:18.923 "name": "BaseBdev4", 00:21:18.923 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:18.923 "is_configured": true, 00:21:18.923 "data_offset": 2048, 00:21:18.923 "data_size": 63488 00:21:18.923 } 00:21:18.923 ] 00:21:18.923 }' 00:21:18.923 21:03:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.923 21:03:47 -- common/autotest_common.sh@10 -- # set +x 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.491 21:03:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:19.750 "name": "raid_bdev1", 00:21:19.750 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:19.750 "strip_size_kb": 0, 00:21:19.750 "state": "online", 00:21:19.750 "raid_level": "raid1", 00:21:19.750 
"superblock": true, 00:21:19.750 "num_base_bdevs": 4, 00:21:19.750 "num_base_bdevs_discovered": 3, 00:21:19.750 "num_base_bdevs_operational": 3, 00:21:19.750 "base_bdevs_list": [ 00:21:19.750 { 00:21:19.750 "name": null, 00:21:19.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.750 "is_configured": false, 00:21:19.750 "data_offset": 2048, 00:21:19.750 "data_size": 63488 00:21:19.750 }, 00:21:19.750 { 00:21:19.750 "name": "BaseBdev2", 00:21:19.750 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:19.750 "is_configured": true, 00:21:19.750 "data_offset": 2048, 00:21:19.750 "data_size": 63488 00:21:19.750 }, 00:21:19.750 { 00:21:19.750 "name": "BaseBdev3", 00:21:19.750 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:19.750 "is_configured": true, 00:21:19.750 "data_offset": 2048, 00:21:19.750 "data_size": 63488 00:21:19.750 }, 00:21:19.750 { 00:21:19.750 "name": "BaseBdev4", 00:21:19.750 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:19.750 "is_configured": true, 00:21:19.750 "data_offset": 2048, 00:21:19.750 "data_size": 63488 00:21:19.750 } 00:21:19.750 ] 00:21:19.750 }' 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:19.750 21:03:47 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.009 [2024-06-09 21:03:48.054262] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:20.009 [2024-06-09 21:03:48.054307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.009 [2024-06-09 21:03:48.064326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:21:20.009 [2024-06-09 21:03:48.066299] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.009 21:03:48 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.945 21:03:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.204 21:03:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.204 "name": "raid_bdev1", 00:21:21.204 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:21.204 "strip_size_kb": 0, 00:21:21.204 "state": "online", 00:21:21.204 "raid_level": "raid1", 00:21:21.204 "superblock": true, 00:21:21.204 "num_base_bdevs": 4, 00:21:21.204 "num_base_bdevs_discovered": 4, 00:21:21.204 "num_base_bdevs_operational": 4, 00:21:21.204 "process": { 00:21:21.204 "type": "rebuild", 00:21:21.204 "target": "spare", 00:21:21.204 "progress": { 00:21:21.204 "blocks": 24576, 00:21:21.204 "percent": 38 00:21:21.204 } 00:21:21.204 }, 00:21:21.204 "base_bdevs_list": [ 00:21:21.204 { 00:21:21.204 "name": "spare", 00:21:21.204 "uuid": 
"97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:21.204 "is_configured": true, 00:21:21.204 "data_offset": 2048, 00:21:21.204 "data_size": 63488 00:21:21.204 }, 00:21:21.204 { 00:21:21.204 "name": "BaseBdev2", 00:21:21.204 "uuid": "d080f98c-b5a2-5b16-b318-01937a6a2ca0", 00:21:21.204 "is_configured": true, 00:21:21.204 "data_offset": 2048, 00:21:21.204 "data_size": 63488 00:21:21.204 }, 00:21:21.204 { 00:21:21.204 "name": "BaseBdev3", 00:21:21.204 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:21.204 "is_configured": true, 00:21:21.204 "data_offset": 2048, 00:21:21.204 "data_size": 63488 00:21:21.204 }, 00:21:21.204 { 00:21:21.204 "name": "BaseBdev4", 00:21:21.204 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:21.204 "is_configured": true, 00:21:21.204 "data_offset": 2048, 00:21:21.204 "data_size": 63488 00:21:21.204 } 00:21:21.204 ] 00:21:21.204 }' 00:21:21.204 21:03:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.204 21:03:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.204 21:03:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:21.463 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:21.463 21:03:49 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:21.463 [2024-06-09 21:03:49.632888] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:21.721 [2024-06-09 21:03:49.676341] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:21:21.721 21:03:49 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.722 21:03:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.980 "name": "raid_bdev1", 00:21:21.980 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:21.980 "strip_size_kb": 0, 00:21:21.980 "state": "online", 00:21:21.980 "raid_level": "raid1", 00:21:21.980 "superblock": true, 00:21:21.980 "num_base_bdevs": 4, 00:21:21.980 "num_base_bdevs_discovered": 3, 00:21:21.980 "num_base_bdevs_operational": 3, 00:21:21.980 "process": { 00:21:21.980 "type": "rebuild", 00:21:21.980 "target": "spare", 00:21:21.980 "progress": { 00:21:21.980 "blocks": 38912, 00:21:21.980 "percent": 61 00:21:21.980 } 00:21:21.980 }, 00:21:21.980 "base_bdevs_list": [ 
00:21:21.980 { 00:21:21.980 "name": "spare", 00:21:21.980 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:21.980 "is_configured": true, 00:21:21.980 "data_offset": 2048, 00:21:21.980 "data_size": 63488 00:21:21.980 }, 00:21:21.980 { 00:21:21.980 "name": null, 00:21:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.980 "is_configured": false, 00:21:21.980 "data_offset": 2048, 00:21:21.980 "data_size": 63488 00:21:21.980 }, 00:21:21.980 { 00:21:21.980 "name": "BaseBdev3", 00:21:21.980 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:21.980 "is_configured": true, 00:21:21.980 "data_offset": 2048, 00:21:21.980 "data_size": 63488 00:21:21.980 }, 00:21:21.980 { 00:21:21.980 "name": "BaseBdev4", 00:21:21.980 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:21.980 "is_configured": true, 00:21:21.980 "data_offset": 2048, 00:21:21.980 "data_size": 63488 00:21:21.980 } 00:21:21.980 ] 00:21:21.980 }' 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@657 -- # local timeout=502 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:21.980 21:03:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.981 21:03:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.981 21:03:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.239 21:03:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.239 "name": "raid_bdev1", 00:21:22.239 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:22.239 "strip_size_kb": 0, 00:21:22.239 "state": "online", 00:21:22.239 "raid_level": "raid1", 00:21:22.239 "superblock": true, 00:21:22.239 "num_base_bdevs": 4, 00:21:22.239 "num_base_bdevs_discovered": 3, 00:21:22.239 "num_base_bdevs_operational": 3, 00:21:22.239 "process": { 00:21:22.239 "type": "rebuild", 00:21:22.239 "target": "spare", 00:21:22.239 "progress": { 00:21:22.239 "blocks": 45056, 00:21:22.239 "percent": 70 00:21:22.239 } 00:21:22.239 }, 00:21:22.239 "base_bdevs_list": [ 00:21:22.239 { 00:21:22.239 "name": "spare", 00:21:22.239 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:22.239 "is_configured": true, 00:21:22.239 "data_offset": 2048, 00:21:22.239 "data_size": 63488 00:21:22.239 }, 00:21:22.239 { 00:21:22.239 "name": null, 00:21:22.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.239 "is_configured": false, 00:21:22.239 "data_offset": 2048, 00:21:22.239 "data_size": 63488 00:21:22.239 }, 00:21:22.239 { 00:21:22.239 "name": "BaseBdev3", 00:21:22.239 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:22.240 "is_configured": true, 00:21:22.240 "data_offset": 2048, 00:21:22.240 "data_size": 63488 00:21:22.240 }, 00:21:22.240 { 00:21:22.240 "name": "BaseBdev4", 00:21:22.240 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:22.240 "is_configured": true, 
00:21:22.240 "data_offset": 2048, 00:21:22.240 "data_size": 63488 00:21:22.240 } 00:21:22.240 ] 00:21:22.240 }' 00:21:22.240 21:03:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.240 21:03:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.240 21:03:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.498 21:03:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.498 21:03:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:23.065 [2024-06-09 21:03:51.197867] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:23.065 [2024-06-09 21:03:51.197974] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:23.065 [2024-06-09 21:03:51.198149] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.323 21:03:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.582 21:03:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.582 "name": "raid_bdev1", 00:21:23.582 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:23.582 "strip_size_kb": 0, 00:21:23.582 "state": "online", 00:21:23.582 "raid_level": "raid1", 00:21:23.582 "superblock": true, 00:21:23.582 "num_base_bdevs": 4, 00:21:23.582 "num_base_bdevs_discovered": 3, 00:21:23.582 "num_base_bdevs_operational": 3, 00:21:23.582 "base_bdevs_list": [ 00:21:23.582 { 00:21:23.582 "name": "spare", 00:21:23.582 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:23.582 "is_configured": true, 00:21:23.582 "data_offset": 2048, 00:21:23.582 "data_size": 63488 00:21:23.582 }, 00:21:23.582 { 00:21:23.582 "name": null, 00:21:23.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.582 "is_configured": false, 00:21:23.582 "data_offset": 2048, 00:21:23.582 "data_size": 63488 00:21:23.582 }, 00:21:23.582 { 00:21:23.582 "name": "BaseBdev3", 00:21:23.582 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:23.582 "is_configured": true, 00:21:23.582 "data_offset": 2048, 00:21:23.582 "data_size": 63488 00:21:23.582 }, 00:21:23.582 { 00:21:23.582 "name": "BaseBdev4", 00:21:23.582 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:23.582 "is_configured": true, 00:21:23.582 "data_offset": 2048, 00:21:23.582 "data_size": 63488 00:21:23.582 } 00:21:23.582 ] 00:21:23.582 }' 00:21:23.582 21:03:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.582 21:03:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:23.582 21:03:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@660 -- # break 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.840 21:03:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.098 "name": "raid_bdev1", 00:21:24.098 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:24.098 "strip_size_kb": 0, 00:21:24.098 "state": "online", 00:21:24.098 "raid_level": "raid1", 00:21:24.098 "superblock": true, 00:21:24.098 "num_base_bdevs": 4, 00:21:24.098 "num_base_bdevs_discovered": 3, 00:21:24.098 "num_base_bdevs_operational": 3, 00:21:24.098 "base_bdevs_list": [ 00:21:24.098 { 00:21:24.098 "name": "spare", 00:21:24.098 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:24.098 "is_configured": true, 00:21:24.098 "data_offset": 2048, 00:21:24.098 "data_size": 63488 00:21:24.098 }, 00:21:24.098 { 00:21:24.098 "name": null, 00:21:24.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.098 "is_configured": false, 00:21:24.098 "data_offset": 2048, 00:21:24.098 "data_size": 63488 00:21:24.098 }, 00:21:24.098 { 00:21:24.098 "name": "BaseBdev3", 00:21:24.098 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:24.098 "is_configured": true, 00:21:24.098 "data_offset": 2048, 00:21:24.098 "data_size": 63488 00:21:24.098 }, 00:21:24.098 { 00:21:24.098 "name": "BaseBdev4", 00:21:24.098 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:24.098 "is_configured": true, 00:21:24.098 "data_offset": 2048, 00:21:24.098 "data_size": 63488 00:21:24.098 } 00:21:24.098 ] 00:21:24.098 }' 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.098 21:03:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.356 21:03:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.356 "name": "raid_bdev1", 00:21:24.356 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:24.356 "strip_size_kb": 0, 00:21:24.356 "state": "online", 00:21:24.356 "raid_level": "raid1", 00:21:24.356 "superblock": true, 00:21:24.356 
"num_base_bdevs": 4, 00:21:24.356 "num_base_bdevs_discovered": 3, 00:21:24.356 "num_base_bdevs_operational": 3, 00:21:24.356 "base_bdevs_list": [ 00:21:24.356 { 00:21:24.356 "name": "spare", 00:21:24.356 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:24.356 "is_configured": true, 00:21:24.356 "data_offset": 2048, 00:21:24.356 "data_size": 63488 00:21:24.356 }, 00:21:24.356 { 00:21:24.356 "name": null, 00:21:24.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.356 "is_configured": false, 00:21:24.356 "data_offset": 2048, 00:21:24.356 "data_size": 63488 00:21:24.356 }, 00:21:24.356 { 00:21:24.356 "name": "BaseBdev3", 00:21:24.356 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:24.356 "is_configured": true, 00:21:24.356 "data_offset": 2048, 00:21:24.356 "data_size": 63488 00:21:24.356 }, 00:21:24.356 { 00:21:24.356 "name": "BaseBdev4", 00:21:24.356 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:24.356 "is_configured": true, 00:21:24.356 "data_offset": 2048, 00:21:24.356 "data_size": 63488 00:21:24.356 } 00:21:24.356 ] 00:21:24.356 }' 00:21:24.356 21:03:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.356 21:03:52 -- common/autotest_common.sh@10 -- # set +x 00:21:24.922 21:03:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:25.180 [2024-06-09 21:03:53.250172] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.180 [2024-06-09 21:03:53.250224] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.180 [2024-06-09 21:03:53.250321] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.180 [2024-06-09 21:03:53.250416] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.180 [2024-06-09 21:03:53.250428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:21:25.180 21:03:53 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.180 21:03:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:25.439 21:03:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:25.439 21:03:53 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:25.439 21:03:53 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@12 -- # local i 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.439 21:03:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:25.697 /dev/nbd0 00:21:25.697 21:03:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:25.697 21:03:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:25.697 21:03:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:25.697 21:03:53 -- 
common/autotest_common.sh@857 -- # local i 00:21:25.697 21:03:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:25.697 21:03:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:25.697 21:03:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:25.697 21:03:53 -- common/autotest_common.sh@861 -- # break 00:21:25.697 21:03:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:25.697 21:03:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:25.697 21:03:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.697 1+0 records in 00:21:25.697 1+0 records out 00:21:25.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520425 s, 7.9 MB/s 00:21:25.697 21:03:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.697 21:03:53 -- common/autotest_common.sh@874 -- # size=4096 00:21:25.697 21:03:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.697 21:03:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:25.697 21:03:53 -- common/autotest_common.sh@877 -- # return 0 00:21:25.697 21:03:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.697 21:03:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.697 21:03:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:25.956 /dev/nbd1 00:21:25.956 21:03:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:25.956 21:03:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:25.956 21:03:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:25.956 21:03:54 -- common/autotest_common.sh@857 -- # local i 00:21:25.956 21:03:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:25.956 21:03:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:25.956 21:03:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:25.956 21:03:54 -- common/autotest_common.sh@861 -- # break 00:21:25.956 21:03:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:25.956 21:03:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:25.956 21:03:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.956 1+0 records in 00:21:25.956 1+0 records out 00:21:25.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548768 s, 7.5 MB/s 00:21:25.956 21:03:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.956 21:03:54 -- common/autotest_common.sh@874 -- # size=4096 00:21:25.956 21:03:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.956 21:03:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:25.956 21:03:54 -- common/autotest_common.sh@877 -- # return 0 00:21:25.956 21:03:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.956 21:03:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.956 21:03:54 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:26.214 21:03:54 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:26.214 21:03:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:26.214 21:03:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:26.214 21:03:54 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.214 21:03:54 -- bdev/nbd_common.sh@51 -- # local i 00:21:26.214 21:03:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.214 21:03:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@41 -- # break 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:26.473 21:03:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@41 -- # break 00:21:26.732 21:03:54 -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.732 21:03:54 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:26.732 21:03:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:26.732 21:03:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:26.732 21:03:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:26.732 21:03:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:26.991 [2024-06-09 21:03:55.150567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:26.991 [2024-06-09 21:03:55.150656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.991 [2024-06-09 21:03:55.150700] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:26.991 [2024-06-09 21:03:55.150723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.991 [2024-06-09 21:03:55.152839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.991 [2024-06-09 21:03:55.152899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:26.991 [2024-06-09 21:03:55.152999] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:26.991 [2024-06-09 21:03:55.153061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.991 BaseBdev1 00:21:26.991 21:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:26.991 21:03:55 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:26.991 21:03:55 -- bdev/bdev_raid.sh@696 -- # continue 00:21:26.991 21:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:26.991 21:03:55 -- 
bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:26.991 21:03:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:27.250 21:03:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:27.510 [2024-06-09 21:03:55.526621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:27.510 [2024-06-09 21:03:55.526678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.510 [2024-06-09 21:03:55.526710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:27.510 [2024-06-09 21:03:55.526729] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.510 [2024-06-09 21:03:55.527192] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.510 [2024-06-09 21:03:55.527258] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:27.510 [2024-06-09 21:03:55.527368] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:27.510 [2024-06-09 21:03:55.527395] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:27.510 [2024-06-09 21:03:55.527402] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.510 [2024-06-09 21:03:55.527419] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:21:27.510 [2024-06-09 21:03:55.527482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:27.510 BaseBdev3 00:21:27.510 21:03:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:27.510 21:03:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:27.510 21:03:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:27.783 21:03:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:28.058 [2024-06-09 21:03:55.966711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:28.058 [2024-06-09 21:03:55.966792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.058 [2024-06-09 21:03:55.966840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:28.058 [2024-06-09 21:03:55.966868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.058 [2024-06-09 21:03:55.967329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.058 [2024-06-09 21:03:55.967384] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:28.058 [2024-06-09 21:03:55.967459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:28.058 [2024-06-09 21:03:55.967480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.058 BaseBdev4 00:21:28.058 21:03:55 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:28.058 21:03:56 -- 
bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:28.317 [2024-06-09 21:03:56.410815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.317 [2024-06-09 21:03:56.410873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.317 [2024-06-09 21:03:56.410905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:28.317 [2024-06-09 21:03:56.410932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.317 [2024-06-09 21:03:56.411329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.317 [2024-06-09 21:03:56.411385] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.317 [2024-06-09 21:03:56.411474] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:28.317 [2024-06-09 21:03:56.411513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.317 spare 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.317 21:03:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.576 [2024-06-09 21:03:56.511614] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:21:28.576 [2024-06-09 21:03:56.511634] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:28.576 [2024-06-09 21:03:56.511748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:21:28.576 [2024-06-09 21:03:56.512098] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:21:28.576 [2024-06-09 21:03:56.512118] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:21:28.576 [2024-06-09 21:03:56.512234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.576 21:03:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.576 "name": "raid_bdev1", 00:21:28.576 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:28.576 "strip_size_kb": 0, 00:21:28.576 "state": "online", 00:21:28.576 "raid_level": "raid1", 00:21:28.576 "superblock": true, 00:21:28.576 "num_base_bdevs": 4, 00:21:28.576 "num_base_bdevs_discovered": 3, 00:21:28.576 "num_base_bdevs_operational": 3, 00:21:28.576 "base_bdevs_list": [ 00:21:28.576 { 00:21:28.576 "name": "spare", 00:21:28.576 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:28.576 "is_configured": 
true, 00:21:28.576 "data_offset": 2048, 00:21:28.576 "data_size": 63488 00:21:28.576 }, 00:21:28.576 { 00:21:28.576 "name": null, 00:21:28.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.576 "is_configured": false, 00:21:28.576 "data_offset": 2048, 00:21:28.576 "data_size": 63488 00:21:28.576 }, 00:21:28.576 { 00:21:28.576 "name": "BaseBdev3", 00:21:28.576 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:28.576 "is_configured": true, 00:21:28.576 "data_offset": 2048, 00:21:28.576 "data_size": 63488 00:21:28.576 }, 00:21:28.576 { 00:21:28.576 "name": "BaseBdev4", 00:21:28.576 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:28.576 "is_configured": true, 00:21:28.576 "data_offset": 2048, 00:21:28.576 "data_size": 63488 00:21:28.576 } 00:21:28.576 ] 00:21:28.576 }' 00:21:28.576 21:03:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.576 21:03:56 -- common/autotest_common.sh@10 -- # set +x 00:21:29.143 21:03:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.143 21:03:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.144 21:03:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:29.144 21:03:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:29.144 21:03:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.144 21:03:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.144 21:03:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.402 21:03:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.402 "name": "raid_bdev1", 00:21:29.402 "uuid": "b4346a55-fa37-4c59-80d6-abe867291ed6", 00:21:29.402 "strip_size_kb": 0, 00:21:29.402 "state": "online", 00:21:29.402 "raid_level": "raid1", 00:21:29.402 "superblock": true, 00:21:29.402 "num_base_bdevs": 4, 00:21:29.402 "num_base_bdevs_discovered": 3, 00:21:29.402 "num_base_bdevs_operational": 3, 00:21:29.402 "base_bdevs_list": [ 00:21:29.402 { 00:21:29.402 "name": "spare", 00:21:29.402 "uuid": "97324c8e-0580-5d9b-8dbe-3403855116fd", 00:21:29.402 "is_configured": true, 00:21:29.402 "data_offset": 2048, 00:21:29.402 "data_size": 63488 00:21:29.402 }, 00:21:29.402 { 00:21:29.402 "name": null, 00:21:29.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.402 "is_configured": false, 00:21:29.402 "data_offset": 2048, 00:21:29.402 "data_size": 63488 00:21:29.402 }, 00:21:29.402 { 00:21:29.402 "name": "BaseBdev3", 00:21:29.402 "uuid": "760e80e7-09c7-58cd-a0b5-1db57ad953df", 00:21:29.402 "is_configured": true, 00:21:29.402 "data_offset": 2048, 00:21:29.402 "data_size": 63488 00:21:29.402 }, 00:21:29.402 { 00:21:29.402 "name": "BaseBdev4", 00:21:29.402 "uuid": "d39ea075-c9e1-5787-9589-04225129947e", 00:21:29.402 "is_configured": true, 00:21:29.402 "data_offset": 2048, 00:21:29.402 "data_size": 63488 00:21:29.402 } 00:21:29.402 ] 00:21:29.402 }' 00:21:29.402 21:03:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.402 21:03:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:29.402 21:03:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.661 21:03:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:29.661 21:03:57 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.661 21:03:57 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:29.920 21:03:57 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.920 21:03:57 -- bdev/bdev_raid.sh@709 -- # killprocess 124547 00:21:29.920 21:03:57 -- common/autotest_common.sh@926 -- # '[' -z 124547 ']' 00:21:29.920 21:03:57 -- common/autotest_common.sh@930 -- # kill -0 124547 00:21:29.920 21:03:57 -- common/autotest_common.sh@931 -- # uname 00:21:29.920 21:03:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.920 21:03:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124547 00:21:29.920 killing process with pid 124547 00:21:29.920 Received shutdown signal, test time was about 60.000000 seconds 00:21:29.920 00:21:29.920 Latency(us) 00:21:29.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.920 =================================================================================================================== 00:21:29.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:29.920 21:03:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.920 21:03:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.920 21:03:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124547' 00:21:29.920 21:03:57 -- common/autotest_common.sh@945 -- # kill 124547 00:21:29.920 [2024-06-09 21:03:57.918732] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.920 21:03:57 -- common/autotest_common.sh@950 -- # wait 124547 00:21:29.920 [2024-06-09 21:03:57.918808] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.920 [2024-06-09 21:03:57.918874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.920 [2024-06-09 21:03:57.918885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:21:30.179 [2024-06-09 21:03:58.255857] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.115 ************************************ 00:21:31.115 END TEST raid_rebuild_test_sb 00:21:31.115 ************************************ 00:21:31.115 21:03:59 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:31.115 00:21:31.115 real 0m27.368s 00:21:31.115 user 0m39.791s 00:21:31.115 sys 0m4.173s 00:21:31.115 21:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.115 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:31.373 21:03:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:31.373 21:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.373 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:21:31.373 ************************************ 00:21:31.373 START TEST raid_rebuild_test_io 00:21:31.373 ************************************ 00:21:31.373 21:03:59 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:31.373 
21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=125203 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:31.373 21:03:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125203 /var/tmp/spdk-raid.sock 00:21:31.373 21:03:59 -- common/autotest_common.sh@819 -- # '[' -z 125203 ']' 00:21:31.373 21:03:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.373 21:03:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.374 21:03:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.374 21:03:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.374 21:03:59 -- common/autotest_common.sh@10 -- # set +x 00:21:31.374 [2024-06-09 21:03:59.398889] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:31.374 [2024-06-09 21:03:59.399036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125203 ] 00:21:31.374 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.374 Zero copy mechanism will not be used. 
00:21:31.631 [2024-06-09 21:03:59.552435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.631 [2024-06-09 21:03:59.754275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.889 [2024-06-09 21:03:59.943271] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.455 21:04:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.455 21:04:00 -- common/autotest_common.sh@852 -- # return 0 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:32.455 BaseBdev1 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:32.455 21:04:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:32.713 BaseBdev2 00:21:32.713 21:04:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.713 21:04:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:32.713 21:04:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:32.971 BaseBdev3 00:21:32.971 21:04:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.971 21:04:01 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:32.971 21:04:01 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:33.229 BaseBdev4 00:21:33.229 21:04:01 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:33.488 spare_malloc 00:21:33.488 21:04:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:33.746 spare_delay 00:21:33.746 21:04:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:33.746 [2024-06-09 21:04:01.890687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:33.746 [2024-06-09 21:04:01.890798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.746 [2024-06-09 21:04:01.890838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:33.746 [2024-06-09 21:04:01.890886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.746 [2024-06-09 21:04:01.893262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.746 [2024-06-09 21:04:01.893319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:33.746 spare 00:21:33.746 21:04:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:34.003 [2024-06-09 21:04:02.090793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.003 [2024-06-09 21:04:02.092799] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.003 [2024-06-09 21:04:02.092851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.003 [2024-06-09 21:04:02.092888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:34.003 [2024-06-09 21:04:02.092960] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:34.003 [2024-06-09 21:04:02.092971] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:34.003 [2024-06-09 21:04:02.093079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:34.003 [2024-06-09 21:04:02.093424] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:34.003 [2024-06-09 21:04:02.093445] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:34.003 [2024-06-09 21:04:02.093592] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.003 21:04:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.260 21:04:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.260 "name": "raid_bdev1", 00:21:34.260 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:34.260 "strip_size_kb": 0, 00:21:34.260 "state": "online", 00:21:34.260 "raid_level": "raid1", 00:21:34.260 "superblock": false, 00:21:34.260 "num_base_bdevs": 4, 00:21:34.260 "num_base_bdevs_discovered": 4, 00:21:34.260 "num_base_bdevs_operational": 4, 00:21:34.260 "base_bdevs_list": [ 00:21:34.260 { 00:21:34.260 "name": "BaseBdev1", 00:21:34.260 "uuid": "8d4a370b-8df4-4409-95ba-cf348cb93fc5", 00:21:34.260 "is_configured": true, 00:21:34.260 "data_offset": 0, 00:21:34.260 "data_size": 65536 00:21:34.260 }, 00:21:34.260 { 00:21:34.260 "name": "BaseBdev2", 00:21:34.260 "uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:34.260 "is_configured": true, 00:21:34.260 "data_offset": 0, 00:21:34.260 "data_size": 65536 00:21:34.260 }, 00:21:34.260 { 00:21:34.260 "name": "BaseBdev3", 00:21:34.260 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:34.260 "is_configured": true, 00:21:34.260 "data_offset": 0, 00:21:34.260 "data_size": 65536 00:21:34.260 }, 00:21:34.260 { 00:21:34.260 "name": "BaseBdev4", 00:21:34.260 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:34.260 "is_configured": true, 00:21:34.260 "data_offset": 0, 00:21:34.260 "data_size": 65536 00:21:34.260 } 00:21:34.260 ] 00:21:34.260 }' 00:21:34.260 
21:04:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.260 21:04:02 -- common/autotest_common.sh@10 -- # set +x 00:21:34.826 21:04:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:34.826 21:04:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:35.083 [2024-06-09 21:04:03.143234] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.083 21:04:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:35.083 21:04:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.083 21:04:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:35.341 21:04:03 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:35.341 21:04:03 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:35.341 21:04:03 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:35.341 21:04:03 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:35.341 [2024-06-09 21:04:03.462313] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:35.341 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:35.341 Zero copy mechanism will not be used. 00:21:35.341 Running I/O for 60 seconds... 00:21:35.600 [2024-06-09 21:04:03.596602] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.600 [2024-06-09 21:04:03.597140] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.600 21:04:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.859 21:04:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.859 "name": "raid_bdev1", 00:21:35.859 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:35.859 "strip_size_kb": 0, 00:21:35.859 "state": "online", 00:21:35.859 "raid_level": "raid1", 00:21:35.859 "superblock": false, 00:21:35.859 "num_base_bdevs": 4, 00:21:35.859 "num_base_bdevs_discovered": 3, 00:21:35.859 "num_base_bdevs_operational": 3, 00:21:35.859 "base_bdevs_list": [ 00:21:35.859 { 00:21:35.859 "name": null, 00:21:35.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.859 "is_configured": false, 00:21:35.859 "data_offset": 0, 00:21:35.859 "data_size": 65536 00:21:35.859 }, 00:21:35.859 { 00:21:35.859 "name": "BaseBdev2", 00:21:35.859 
"uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:35.859 "is_configured": true, 00:21:35.859 "data_offset": 0, 00:21:35.859 "data_size": 65536 00:21:35.859 }, 00:21:35.859 { 00:21:35.859 "name": "BaseBdev3", 00:21:35.859 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:35.859 "is_configured": true, 00:21:35.859 "data_offset": 0, 00:21:35.859 "data_size": 65536 00:21:35.859 }, 00:21:35.859 { 00:21:35.859 "name": "BaseBdev4", 00:21:35.859 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:35.859 "is_configured": true, 00:21:35.859 "data_offset": 0, 00:21:35.859 "data_size": 65536 00:21:35.859 } 00:21:35.859 ] 00:21:35.859 }' 00:21:35.859 21:04:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.859 21:04:03 -- common/autotest_common.sh@10 -- # set +x 00:21:36.427 21:04:04 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:36.685 [2024-06-09 21:04:04.767001] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:36.685 [2024-06-09 21:04:04.767390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:36.685 [2024-06-09 21:04:04.797128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:36.685 [2024-06-09 21:04:04.799402] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:36.685 21:04:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:36.944 [2024-06-09 21:04:04.908924] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:36.944 [2024-06-09 21:04:04.909573] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:37.203 [2024-06-09 21:04:05.129035] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:37.203 [2024-06-09 21:04:05.129555] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:37.462 [2024-06-09 21:04:05.459173] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:37.462 [2024-06-09 21:04:05.580485] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.733 21:04:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.006 [2024-06-09 21:04:06.018968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:38.006 "name": "raid_bdev1", 00:21:38.006 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:38.006 "strip_size_kb": 0, 00:21:38.006 "state": "online", 00:21:38.006 "raid_level": "raid1", 
00:21:38.006 "superblock": false, 00:21:38.006 "num_base_bdevs": 4, 00:21:38.006 "num_base_bdevs_discovered": 4, 00:21:38.006 "num_base_bdevs_operational": 4, 00:21:38.006 "process": { 00:21:38.006 "type": "rebuild", 00:21:38.006 "target": "spare", 00:21:38.006 "progress": { 00:21:38.006 "blocks": 16384, 00:21:38.006 "percent": 25 00:21:38.006 } 00:21:38.006 }, 00:21:38.006 "base_bdevs_list": [ 00:21:38.006 { 00:21:38.006 "name": "spare", 00:21:38.006 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:38.006 "is_configured": true, 00:21:38.006 "data_offset": 0, 00:21:38.006 "data_size": 65536 00:21:38.006 }, 00:21:38.006 { 00:21:38.006 "name": "BaseBdev2", 00:21:38.006 "uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:38.006 "is_configured": true, 00:21:38.006 "data_offset": 0, 00:21:38.006 "data_size": 65536 00:21:38.006 }, 00:21:38.006 { 00:21:38.006 "name": "BaseBdev3", 00:21:38.006 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:38.006 "is_configured": true, 00:21:38.006 "data_offset": 0, 00:21:38.006 "data_size": 65536 00:21:38.006 }, 00:21:38.006 { 00:21:38.006 "name": "BaseBdev4", 00:21:38.006 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:38.006 "is_configured": true, 00:21:38.006 "data_offset": 0, 00:21:38.006 "data_size": 65536 00:21:38.006 } 00:21:38.006 ] 00:21:38.006 }' 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.006 21:04:06 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:38.265 [2024-06-09 21:04:06.359393] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:38.265 [2024-06-09 21:04:06.359784] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:38.524 [2024-06-09 21:04:06.468043] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:38.524 [2024-06-09 21:04:06.471123] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.524 [2024-06-09 21:04:06.498748] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:38.524 21:04:06 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:38.524 21:04:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:38.524 21:04:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:38.524 21:04:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.525 21:04:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.784 21:04:06 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.784 "name": "raid_bdev1", 00:21:38.784 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:38.784 "strip_size_kb": 0, 00:21:38.784 "state": "online", 00:21:38.784 "raid_level": "raid1", 00:21:38.784 "superblock": false, 00:21:38.784 "num_base_bdevs": 4, 00:21:38.784 "num_base_bdevs_discovered": 3, 00:21:38.784 "num_base_bdevs_operational": 3, 00:21:38.784 "base_bdevs_list": [ 00:21:38.784 { 00:21:38.784 "name": null, 00:21:38.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.784 "is_configured": false, 00:21:38.784 "data_offset": 0, 00:21:38.784 "data_size": 65536 00:21:38.784 }, 00:21:38.784 { 00:21:38.784 "name": "BaseBdev2", 00:21:38.784 "uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:38.784 "is_configured": true, 00:21:38.784 "data_offset": 0, 00:21:38.784 "data_size": 65536 00:21:38.784 }, 00:21:38.784 { 00:21:38.784 "name": "BaseBdev3", 00:21:38.784 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:38.784 "is_configured": true, 00:21:38.784 "data_offset": 0, 00:21:38.784 "data_size": 65536 00:21:38.784 }, 00:21:38.784 { 00:21:38.784 "name": "BaseBdev4", 00:21:38.784 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:38.784 "is_configured": true, 00:21:38.784 "data_offset": 0, 00:21:38.784 "data_size": 65536 00:21:38.784 } 00:21:38.784 ] 00:21:38.784 }' 00:21:38.784 21:04:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.784 21:04:06 -- common/autotest_common.sh@10 -- # set +x 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.352 21:04:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.611 21:04:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:39.611 "name": "raid_bdev1", 00:21:39.611 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:39.611 "strip_size_kb": 0, 00:21:39.611 "state": "online", 00:21:39.611 "raid_level": "raid1", 00:21:39.611 "superblock": false, 00:21:39.611 "num_base_bdevs": 4, 00:21:39.611 "num_base_bdevs_discovered": 3, 00:21:39.611 "num_base_bdevs_operational": 3, 00:21:39.611 "base_bdevs_list": [ 00:21:39.611 { 00:21:39.611 "name": null, 00:21:39.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.611 "is_configured": false, 00:21:39.611 "data_offset": 0, 00:21:39.611 "data_size": 65536 00:21:39.611 }, 00:21:39.611 { 00:21:39.611 "name": "BaseBdev2", 00:21:39.611 "uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:39.611 "is_configured": true, 00:21:39.611 "data_offset": 0, 00:21:39.611 "data_size": 65536 00:21:39.611 }, 00:21:39.611 { 00:21:39.611 "name": "BaseBdev3", 00:21:39.611 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:39.611 "is_configured": true, 00:21:39.611 "data_offset": 0, 00:21:39.611 "data_size": 65536 00:21:39.611 }, 00:21:39.611 { 00:21:39.611 "name": "BaseBdev4", 00:21:39.611 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:39.611 "is_configured": true, 00:21:39.611 "data_offset": 0, 00:21:39.611 "data_size": 65536 00:21:39.611 } 00:21:39.611 ] 00:21:39.611 }' 00:21:39.611 21:04:07 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:39.611 21:04:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:39.611 21:04:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:39.869 21:04:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:39.869 21:04:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:39.870 [2024-06-09 21:04:07.997977] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:39.870 [2024-06-09 21:04:07.998360] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:40.128 21:04:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:40.128 [2024-06-09 21:04:08.063375] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:40.128 [2024-06-09 21:04:08.065662] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:40.128 [2024-06-09 21:04:08.190998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:40.128 [2024-06-09 21:04:08.192035] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:40.387 [2024-06-09 21:04:08.414102] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:40.387 [2024-06-09 21:04:08.414752] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:40.645 [2024-06-09 21:04:08.735872] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:40.904 [2024-06-09 21:04:08.860229] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.904 21:04:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.163 [2024-06-09 21:04:09.191791] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:41.163 [2024-06-09 21:04:09.199476] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:41.163 21:04:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:41.163 "name": "raid_bdev1", 00:21:41.163 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:41.163 "strip_size_kb": 0, 00:21:41.163 "state": "online", 00:21:41.163 "raid_level": "raid1", 00:21:41.163 "superblock": false, 00:21:41.163 "num_base_bdevs": 4, 00:21:41.163 "num_base_bdevs_discovered": 4, 00:21:41.163 "num_base_bdevs_operational": 4, 00:21:41.163 "process": { 00:21:41.163 "type": "rebuild", 00:21:41.163 "target": "spare", 00:21:41.163 "progress": { 00:21:41.163 "blocks": 14336, 00:21:41.163 
"percent": 21 00:21:41.163 } 00:21:41.163 }, 00:21:41.163 "base_bdevs_list": [ 00:21:41.163 { 00:21:41.163 "name": "spare", 00:21:41.163 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:41.163 "is_configured": true, 00:21:41.163 "data_offset": 0, 00:21:41.163 "data_size": 65536 00:21:41.163 }, 00:21:41.163 { 00:21:41.163 "name": "BaseBdev2", 00:21:41.163 "uuid": "9ccae41e-0396-418d-853c-620685db74d7", 00:21:41.163 "is_configured": true, 00:21:41.163 "data_offset": 0, 00:21:41.163 "data_size": 65536 00:21:41.163 }, 00:21:41.163 { 00:21:41.163 "name": "BaseBdev3", 00:21:41.163 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:41.163 "is_configured": true, 00:21:41.163 "data_offset": 0, 00:21:41.163 "data_size": 65536 00:21:41.163 }, 00:21:41.163 { 00:21:41.163 "name": "BaseBdev4", 00:21:41.163 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:41.163 "is_configured": true, 00:21:41.163 "data_offset": 0, 00:21:41.163 "data_size": 65536 00:21:41.163 } 00:21:41.163 ] 00:21:41.163 }' 00:21:41.163 21:04:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:41.163 21:04:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:41.421 21:04:09 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:41.421 [2024-06-09 21:04:09.410411] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:41.680 [2024-06-09 21:04:09.621972] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:41.680 [2024-06-09 21:04:09.739883] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:21:41.680 [2024-06-09 21:04:09.740089] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.680 21:04:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.939 21:04:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:41.939 "name": "raid_bdev1", 00:21:41.939 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:41.939 "strip_size_kb": 0, 00:21:41.939 "state": "online", 00:21:41.939 "raid_level": "raid1", 00:21:41.939 "superblock": false, 00:21:41.939 "num_base_bdevs": 4, 00:21:41.939 "num_base_bdevs_discovered": 3, 00:21:41.939 
"num_base_bdevs_operational": 3, 00:21:41.939 "process": { 00:21:41.939 "type": "rebuild", 00:21:41.939 "target": "spare", 00:21:41.939 "progress": { 00:21:41.939 "blocks": 20480, 00:21:41.939 "percent": 31 00:21:41.939 } 00:21:41.939 }, 00:21:41.939 "base_bdevs_list": [ 00:21:41.939 { 00:21:41.939 "name": "spare", 00:21:41.939 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:41.939 "is_configured": true, 00:21:41.939 "data_offset": 0, 00:21:41.939 "data_size": 65536 00:21:41.939 }, 00:21:41.939 { 00:21:41.939 "name": null, 00:21:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.939 "is_configured": false, 00:21:41.939 "data_offset": 0, 00:21:41.939 "data_size": 65536 00:21:41.939 }, 00:21:41.939 { 00:21:41.939 "name": "BaseBdev3", 00:21:41.939 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:41.939 "is_configured": true, 00:21:41.939 "data_offset": 0, 00:21:41.939 "data_size": 65536 00:21:41.939 }, 00:21:41.939 { 00:21:41.939 "name": "BaseBdev4", 00:21:41.939 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:41.939 "is_configured": true, 00:21:41.939 "data_offset": 0, 00:21:41.939 "data_size": 65536 00:21:41.939 } 00:21:41.939 ] 00:21:41.939 }' 00:21:41.939 21:04:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:41.939 [2024-06-09 21:04:10.002167] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@657 -- # local timeout=522 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.939 21:04:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.198 21:04:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:42.198 "name": "raid_bdev1", 00:21:42.198 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:42.198 "strip_size_kb": 0, 00:21:42.198 "state": "online", 00:21:42.198 "raid_level": "raid1", 00:21:42.198 "superblock": false, 00:21:42.198 "num_base_bdevs": 4, 00:21:42.198 "num_base_bdevs_discovered": 3, 00:21:42.198 "num_base_bdevs_operational": 3, 00:21:42.198 "process": { 00:21:42.198 "type": "rebuild", 00:21:42.198 "target": "spare", 00:21:42.198 "progress": { 00:21:42.198 "blocks": 24576, 00:21:42.198 "percent": 37 00:21:42.198 } 00:21:42.198 }, 00:21:42.198 "base_bdevs_list": [ 00:21:42.198 { 00:21:42.198 "name": "spare", 00:21:42.198 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:42.198 "is_configured": true, 00:21:42.198 "data_offset": 0, 00:21:42.198 "data_size": 65536 00:21:42.198 }, 00:21:42.198 { 00:21:42.198 "name": null, 00:21:42.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.198 "is_configured": false, 00:21:42.198 "data_offset": 0, 
00:21:42.198 "data_size": 65536 00:21:42.198 }, 00:21:42.198 { 00:21:42.198 "name": "BaseBdev3", 00:21:42.198 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:42.198 "is_configured": true, 00:21:42.198 "data_offset": 0, 00:21:42.198 "data_size": 65536 00:21:42.198 }, 00:21:42.198 { 00:21:42.198 "name": "BaseBdev4", 00:21:42.199 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:42.199 "is_configured": true, 00:21:42.199 "data_offset": 0, 00:21:42.199 "data_size": 65536 00:21:42.199 } 00:21:42.199 ] 00:21:42.199 }' 00:21:42.199 21:04:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:42.199 21:04:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.458 21:04:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:42.458 21:04:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.458 21:04:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:42.458 [2024-06-09 21:04:10.450625] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:42.716 [2024-06-09 21:04:10.798511] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:43.284 [2024-06-09 21:04:11.232450] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:43.284 [2024-06-09 21:04:11.355808] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.284 21:04:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.543 21:04:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:43.543 "name": "raid_bdev1", 00:21:43.543 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:43.543 "strip_size_kb": 0, 00:21:43.543 "state": "online", 00:21:43.543 "raid_level": "raid1", 00:21:43.543 "superblock": false, 00:21:43.543 "num_base_bdevs": 4, 00:21:43.543 "num_base_bdevs_discovered": 3, 00:21:43.543 "num_base_bdevs_operational": 3, 00:21:43.543 "process": { 00:21:43.543 "type": "rebuild", 00:21:43.543 "target": "spare", 00:21:43.543 "progress": { 00:21:43.543 "blocks": 45056, 00:21:43.543 "percent": 68 00:21:43.543 } 00:21:43.543 }, 00:21:43.543 "base_bdevs_list": [ 00:21:43.543 { 00:21:43.543 "name": "spare", 00:21:43.543 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:43.543 "is_configured": true, 00:21:43.543 "data_offset": 0, 00:21:43.543 "data_size": 65536 00:21:43.543 }, 00:21:43.543 { 00:21:43.543 "name": null, 00:21:43.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.543 "is_configured": false, 00:21:43.543 "data_offset": 0, 00:21:43.543 "data_size": 65536 00:21:43.543 }, 00:21:43.543 { 00:21:43.543 "name": "BaseBdev3", 00:21:43.543 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:43.543 "is_configured": true, 
00:21:43.543 "data_offset": 0, 00:21:43.543 "data_size": 65536 00:21:43.543 }, 00:21:43.543 { 00:21:43.543 "name": "BaseBdev4", 00:21:43.543 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:43.543 "is_configured": true, 00:21:43.543 "data_offset": 0, 00:21:43.543 "data_size": 65536 00:21:43.543 } 00:21:43.543 ] 00:21:43.543 }' 00:21:43.543 21:04:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:43.802 21:04:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:43.802 21:04:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:43.802 21:04:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.802 21:04:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:43.802 [2024-06-09 21:04:11.927043] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:44.369 [2024-06-09 21:04:12.267015] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:44.369 [2024-06-09 21:04:12.267985] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.626 21:04:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.885 [2024-06-09 21:04:12.819782] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:44.885 [2024-06-09 21:04:12.925589] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:44.885 [2024-06-09 21:04:12.928282] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.885 21:04:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:44.885 "name": "raid_bdev1", 00:21:44.885 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:44.885 "strip_size_kb": 0, 00:21:44.885 "state": "online", 00:21:44.885 "raid_level": "raid1", 00:21:44.885 "superblock": false, 00:21:44.885 "num_base_bdevs": 4, 00:21:44.885 "num_base_bdevs_discovered": 3, 00:21:44.885 "num_base_bdevs_operational": 3, 00:21:44.885 "base_bdevs_list": [ 00:21:44.885 { 00:21:44.885 "name": "spare", 00:21:44.885 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:44.885 "is_configured": true, 00:21:44.885 "data_offset": 0, 00:21:44.885 "data_size": 65536 00:21:44.885 }, 00:21:44.885 { 00:21:44.885 "name": null, 00:21:44.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.885 "is_configured": false, 00:21:44.885 "data_offset": 0, 00:21:44.885 "data_size": 65536 00:21:44.885 }, 00:21:44.885 { 00:21:44.885 "name": "BaseBdev3", 00:21:44.885 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:44.885 "is_configured": true, 00:21:44.885 "data_offset": 0, 00:21:44.885 "data_size": 65536 00:21:44.885 }, 00:21:44.885 { 00:21:44.885 "name": "BaseBdev4", 00:21:44.885 "uuid": 
"73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:44.885 "is_configured": true, 00:21:44.885 "data_offset": 0, 00:21:44.885 "data_size": 65536 00:21:44.885 } 00:21:44.885 ] 00:21:44.885 }' 00:21:44.885 21:04:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@660 -- # break 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.144 21:04:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.403 "name": "raid_bdev1", 00:21:45.403 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:45.403 "strip_size_kb": 0, 00:21:45.403 "state": "online", 00:21:45.403 "raid_level": "raid1", 00:21:45.403 "superblock": false, 00:21:45.403 "num_base_bdevs": 4, 00:21:45.403 "num_base_bdevs_discovered": 3, 00:21:45.403 "num_base_bdevs_operational": 3, 00:21:45.403 "base_bdevs_list": [ 00:21:45.403 { 00:21:45.403 "name": "spare", 00:21:45.403 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:45.403 "is_configured": true, 00:21:45.403 "data_offset": 0, 00:21:45.403 "data_size": 65536 00:21:45.403 }, 00:21:45.403 { 00:21:45.403 "name": null, 00:21:45.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.403 "is_configured": false, 00:21:45.403 "data_offset": 0, 00:21:45.403 "data_size": 65536 00:21:45.403 }, 00:21:45.403 { 00:21:45.403 "name": "BaseBdev3", 00:21:45.403 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:45.403 "is_configured": true, 00:21:45.403 "data_offset": 0, 00:21:45.403 "data_size": 65536 00:21:45.403 }, 00:21:45.403 { 00:21:45.403 "name": "BaseBdev4", 00:21:45.403 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:45.403 "is_configured": true, 00:21:45.403 "data_offset": 0, 00:21:45.403 "data_size": 65536 00:21:45.403 } 00:21:45.403 ] 00:21:45.403 }' 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.403 21:04:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.662 21:04:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.662 "name": "raid_bdev1", 00:21:45.662 "uuid": "1a9dce92-96b4-4b69-a47f-e221aa16344a", 00:21:45.662 "strip_size_kb": 0, 00:21:45.662 "state": "online", 00:21:45.662 "raid_level": "raid1", 00:21:45.662 "superblock": false, 00:21:45.662 "num_base_bdevs": 4, 00:21:45.662 "num_base_bdevs_discovered": 3, 00:21:45.662 "num_base_bdevs_operational": 3, 00:21:45.662 "base_bdevs_list": [ 00:21:45.662 { 00:21:45.662 "name": "spare", 00:21:45.662 "uuid": "92eb2537-c3c8-50f5-a710-9d2e96da6c95", 00:21:45.662 "is_configured": true, 00:21:45.662 "data_offset": 0, 00:21:45.662 "data_size": 65536 00:21:45.662 }, 00:21:45.662 { 00:21:45.662 "name": null, 00:21:45.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.662 "is_configured": false, 00:21:45.662 "data_offset": 0, 00:21:45.662 "data_size": 65536 00:21:45.662 }, 00:21:45.662 { 00:21:45.662 "name": "BaseBdev3", 00:21:45.662 "uuid": "7586dc27-2a20-4945-8942-0e6156f4aa95", 00:21:45.662 "is_configured": true, 00:21:45.662 "data_offset": 0, 00:21:45.662 "data_size": 65536 00:21:45.662 }, 00:21:45.662 { 00:21:45.662 "name": "BaseBdev4", 00:21:45.662 "uuid": "73b64d0a-c595-4e59-a813-0ffee8c88a36", 00:21:45.662 "is_configured": true, 00:21:45.662 "data_offset": 0, 00:21:45.662 "data_size": 65536 00:21:45.662 } 00:21:45.662 ] 00:21:45.662 }' 00:21:45.662 21:04:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.662 21:04:13 -- common/autotest_common.sh@10 -- # set +x 00:21:46.229 21:04:14 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.488 [2024-06-09 21:04:14.590608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.488 [2024-06-09 21:04:14.591359] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.488 00:21:46.488 Latency(us) 00:21:46.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.488 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:46.488 raid_bdev1 : 11.20 105.11 315.33 0.00 0.00 13643.30 283.00 111053.73 00:21:46.488 =================================================================================================================== 00:21:46.488 Total : 105.11 315.33 0.00 0.00 13643.30 283.00 111053.73 00:21:46.747 [2024-06-09 21:04:14.679195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.747 [2024-06-09 21:04:14.679349] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.747 0 00:21:46.747 [2024-06-09 21:04:14.679473] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.747 [2024-06-09 21:04:14.679489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:46.747 21:04:14 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.747 21:04:14 -- 
bdev/bdev_raid.sh@671 -- # jq length 00:21:46.747 21:04:14 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:46.747 21:04:14 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:46.747 21:04:14 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@12 -- # local i 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:46.747 21:04:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:47.038 /dev/nbd0 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:47.038 21:04:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:47.038 21:04:15 -- common/autotest_common.sh@857 -- # local i 00:21:47.038 21:04:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:47.038 21:04:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:47.038 21:04:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:47.038 21:04:15 -- common/autotest_common.sh@861 -- # break 00:21:47.038 21:04:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:47.038 21:04:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:47.038 21:04:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.038 1+0 records in 00:21:47.038 1+0 records out 00:21:47.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509936 s, 8.0 MB/s 00:21:47.038 21:04:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.038 21:04:15 -- common/autotest_common.sh@874 -- # size=4096 00:21:47.038 21:04:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.038 21:04:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:47.038 21:04:15 -- common/autotest_common.sh@877 -- # return 0 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@678 -- # continue 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:47.038 21:04:15 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@12 -- # local i 00:21:47.038 
21:04:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.038 21:04:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:47.306 /dev/nbd1 00:21:47.306 21:04:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:47.306 21:04:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:47.306 21:04:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:47.306 21:04:15 -- common/autotest_common.sh@857 -- # local i 00:21:47.306 21:04:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:47.306 21:04:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:47.306 21:04:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:47.306 21:04:15 -- common/autotest_common.sh@861 -- # break 00:21:47.306 21:04:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:47.306 21:04:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:47.306 21:04:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.306 1+0 records in 00:21:47.306 1+0 records out 00:21:47.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507501 s, 8.1 MB/s 00:21:47.306 21:04:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.306 21:04:15 -- common/autotest_common.sh@874 -- # size=4096 00:21:47.306 21:04:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.306 21:04:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:47.306 21:04:15 -- common/autotest_common.sh@877 -- # return 0 00:21:47.306 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.306 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.306 21:04:15 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:47.565 21:04:15 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@51 -- # local i 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:47.565 21:04:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@41 -- # break 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@45 -- # return 0 00:21:47.824 21:04:15 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:47.824 21:04:15 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:47.824 21:04:15 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:47.824 21:04:15 -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@12 -- # local i 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.824 21:04:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:48.083 /dev/nbd1 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:48.083 21:04:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:48.083 21:04:16 -- common/autotest_common.sh@857 -- # local i 00:21:48.083 21:04:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:48.083 21:04:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:48.083 21:04:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:48.083 21:04:16 -- common/autotest_common.sh@861 -- # break 00:21:48.083 21:04:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:48.083 21:04:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:48.083 21:04:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:48.083 1+0 records in 00:21:48.083 1+0 records out 00:21:48.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504515 s, 8.1 MB/s 00:21:48.083 21:04:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:48.083 21:04:16 -- common/autotest_common.sh@874 -- # size=4096 00:21:48.083 21:04:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:48.083 21:04:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:48.083 21:04:16 -- common/autotest_common.sh@877 -- # return 0 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:48.083 21:04:16 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:48.083 21:04:16 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@51 -- # local i 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.083 21:04:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@41 -- # break 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.341 21:04:16 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd0 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@51 -- # local i 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.341 21:04:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@41 -- # break 00:21:48.600 21:04:16 -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.600 21:04:16 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:48.600 21:04:16 -- bdev/bdev_raid.sh@709 -- # killprocess 125203 00:21:48.600 21:04:16 -- common/autotest_common.sh@926 -- # '[' -z 125203 ']' 00:21:48.600 21:04:16 -- common/autotest_common.sh@930 -- # kill -0 125203 00:21:48.600 21:04:16 -- common/autotest_common.sh@931 -- # uname 00:21:48.600 21:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.600 21:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125203 00:21:48.600 21:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:48.600 21:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:48.600 21:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125203' 00:21:48.600 killing process with pid 125203 00:21:48.600 21:04:16 -- common/autotest_common.sh@945 -- # kill 125203 00:21:48.600 Received shutdown signal, test time was about 13.306979 seconds 00:21:48.600 00:21:48.600 Latency(us) 00:21:48.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.600 =================================================================================================================== 00:21:48.600 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.600 21:04:16 -- common/autotest_common.sh@950 -- # wait 125203 00:21:48.600 [2024-06-09 21:04:16.771940] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.168 [2024-06-09 21:04:17.065640] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:50.104 ************************************ 00:21:50.104 END TEST raid_rebuild_test_io 00:21:50.104 ************************************ 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:50.104 00:21:50.104 real 0m18.821s 00:21:50.104 user 0m29.112s 00:21:50.104 sys 0m2.329s 00:21:50.104 21:04:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.104 21:04:18 -- common/autotest_common.sh@10 -- # set +x 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:21:50.104 21:04:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:50.104 21:04:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:50.104 21:04:18 -- common/autotest_common.sh@10 -- # set +x 00:21:50.104 ************************************ 00:21:50.104 START TEST 
raid_rebuild_test_sb_io 00:21:50.104 ************************************ 00:21:50.104 21:04:18 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=125722 00:21:50.104 21:04:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125722 /var/tmp/spdk-raid.sock 00:21:50.105 21:04:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:50.105 21:04:18 -- common/autotest_common.sh@819 -- # '[' -z 125722 ']' 00:21:50.105 21:04:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:50.105 21:04:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:50.105 21:04:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:50.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:50.105 21:04:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:50.105 21:04:18 -- common/autotest_common.sh@10 -- # set +x 00:21:50.363 [2024-06-09 21:04:18.299663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:21:50.363 [2024-06-09 21:04:18.300067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125722 ] 00:21:50.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:50.363 Zero copy mechanism will not be used. 00:21:50.363 [2024-06-09 21:04:18.462910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.622 [2024-06-09 21:04:18.661312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.881 [2024-06-09 21:04:18.855427] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:51.139 21:04:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:51.139 21:04:19 -- common/autotest_common.sh@852 -- # return 0 00:21:51.139 21:04:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:51.139 21:04:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:51.139 21:04:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:51.398 BaseBdev1_malloc 00:21:51.398 21:04:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:51.657 [2024-06-09 21:04:19.675587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:51.657 [2024-06-09 21:04:19.675964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.657 [2024-06-09 21:04:19.676053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:51.657 [2024-06-09 21:04:19.676390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.657 [2024-06-09 21:04:19.678850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.657 [2024-06-09 21:04:19.679028] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:51.657 BaseBdev1 00:21:51.657 21:04:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:51.657 21:04:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:51.657 21:04:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:51.915 BaseBdev2_malloc 00:21:51.916 21:04:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:52.174 [2024-06-09 21:04:20.163003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:52.174 [2024-06-09 21:04:20.163345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.174 [2024-06-09 21:04:20.163429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:52.174 [2024-06-09 21:04:20.163595] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.174 [2024-06-09 21:04:20.165903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.174 [2024-06-09 21:04:20.166076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:52.174 BaseBdev2 00:21:52.174 21:04:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
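The xtrace at bdev_raid.sh@548-551 above repeats the same steps for each entry in base_bdevs. A minimal sketch of that loop as it can be reconstructed from the trace; rpc_py stands in for the traced scripts/rpc.py -s /var/tmp/spdk-raid.sock invocation, and reading the '[' true = true ']' test at @549 as a superblock check is an assumption:

    for bdev in "${base_bdevs[@]}"; do
        if [ "$superblock" = true ]; then
            # @550: back each base bdev with a 32 MiB, 512-byte-block malloc
            # disk (32 MiB / 512 B = 65536 blocks, matching the dumps below)
            $rpc_py bdev_malloc_create 32 512 -b "${bdev}_malloc"
            # @551: wrap it in a passthru bdev so it can be claimed and
            # hot-removed under its plain BaseBdevN name
            $rpc_py bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
        fi
    done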
00:21:52.174 21:04:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:52.174 21:04:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:52.432 BaseBdev3_malloc 00:21:52.432 21:04:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:52.690 [2024-06-09 21:04:20.619710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:52.690 [2024-06-09 21:04:20.619950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.690 [2024-06-09 21:04:20.620030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:52.690 [2024-06-09 21:04:20.620179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.690 [2024-06-09 21:04:20.622427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.690 [2024-06-09 21:04:20.622604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:52.690 BaseBdev3 00:21:52.690 21:04:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:52.690 21:04:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:52.690 21:04:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:52.949 BaseBdev4_malloc 00:21:52.949 21:04:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:52.949 [2024-06-09 21:04:21.089979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:52.949 [2024-06-09 21:04:21.090255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.949 [2024-06-09 21:04:21.090327] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:52.949 [2024-06-09 21:04:21.090483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.949 [2024-06-09 21:04:21.092882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.949 [2024-06-09 21:04:21.093050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:52.949 BaseBdev4 00:21:52.949 21:04:21 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:53.206 spare_malloc 00:21:53.206 21:04:21 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:53.465 spare_delay 00:21:53.465 21:04:21 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:53.723 [2024-06-09 21:04:21.808288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:53.723 [2024-06-09 21:04:21.808654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.723 [2024-06-09 21:04:21.808743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:53.723 [2024-06-09 21:04:21.808916] vbdev_passthru.c: 691:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:21:53.723 [2024-06-09 21:04:21.811536] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.723 [2024-06-09 21:04:21.811717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:53.723 spare 00:21:53.723 21:04:21 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:53.981 [2024-06-09 21:04:22.008448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.981 [2024-06-09 21:04:22.010499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:53.981 [2024-06-09 21:04:22.010711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:53.981 [2024-06-09 21:04:22.010824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:53.981 [2024-06-09 21:04:22.011166] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:53.981 [2024-06-09 21:04:22.011231] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:53.981 [2024-06-09 21:04:22.011455] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:53.981 [2024-06-09 21:04:22.011943] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:53.981 [2024-06-09 21:04:22.012071] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:53.981 [2024-06-09 21:04:22.012291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.981 21:04:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.239 21:04:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.239 "name": "raid_bdev1", 00:21:54.239 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:21:54.239 "strip_size_kb": 0, 00:21:54.239 "state": "online", 00:21:54.239 "raid_level": "raid1", 00:21:54.239 "superblock": true, 00:21:54.239 "num_base_bdevs": 4, 00:21:54.239 "num_base_bdevs_discovered": 4, 00:21:54.239 "num_base_bdevs_operational": 4, 00:21:54.239 "base_bdevs_list": [ 00:21:54.239 { 00:21:54.239 "name": "BaseBdev1", 00:21:54.239 "uuid": "73d8c36b-f09f-5fb8-afc2-ff62c805d091", 00:21:54.239 "is_configured": true, 00:21:54.239 "data_offset": 2048, 00:21:54.239 "data_size": 63488 00:21:54.239 }, 00:21:54.239 { 00:21:54.239 "name": "BaseBdev2", 
00:21:54.239 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:21:54.239 "is_configured": true, 00:21:54.239 "data_offset": 2048, 00:21:54.239 "data_size": 63488 00:21:54.239 }, 00:21:54.239 { 00:21:54.239 "name": "BaseBdev3", 00:21:54.239 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:21:54.239 "is_configured": true, 00:21:54.239 "data_offset": 2048, 00:21:54.239 "data_size": 63488 00:21:54.239 }, 00:21:54.239 { 00:21:54.239 "name": "BaseBdev4", 00:21:54.239 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:21:54.239 "is_configured": true, 00:21:54.239 "data_offset": 2048, 00:21:54.239 "data_size": 63488 00:21:54.239 } 00:21:54.239 ] 00:21:54.239 }' 00:21:54.239 21:04:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.239 21:04:22 -- common/autotest_common.sh@10 -- # set +x 00:21:54.804 21:04:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:54.804 21:04:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:55.062 [2024-06-09 21:04:23.076783] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:55.062 21:04:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:55.062 21:04:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.062 21:04:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:55.320 21:04:23 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:55.320 21:04:23 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:55.320 21:04:23 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:55.320 21:04:23 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:55.320 [2024-06-09 21:04:23.391948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:55.320 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:55.320 Zero copy mechanism will not be used. 00:21:55.320 Running I/O for 60 seconds... 
00:21:55.320 [2024-06-09 21:04:23.466028] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:55.321 [2024-06-09 21:04:23.472307] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.579 "name": "raid_bdev1", 00:21:55.579 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:21:55.579 "strip_size_kb": 0, 00:21:55.579 "state": "online", 00:21:55.579 "raid_level": "raid1", 00:21:55.579 "superblock": true, 00:21:55.579 "num_base_bdevs": 4, 00:21:55.579 "num_base_bdevs_discovered": 3, 00:21:55.579 "num_base_bdevs_operational": 3, 00:21:55.579 "base_bdevs_list": [ 00:21:55.579 { 00:21:55.579 "name": null, 00:21:55.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.579 "is_configured": false, 00:21:55.579 "data_offset": 2048, 00:21:55.579 "data_size": 63488 00:21:55.579 }, 00:21:55.579 { 00:21:55.579 "name": "BaseBdev2", 00:21:55.579 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:21:55.579 "is_configured": true, 00:21:55.579 "data_offset": 2048, 00:21:55.579 "data_size": 63488 00:21:55.579 }, 00:21:55.579 { 00:21:55.579 "name": "BaseBdev3", 00:21:55.579 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:21:55.579 "is_configured": true, 00:21:55.579 "data_offset": 2048, 00:21:55.579 "data_size": 63488 00:21:55.579 }, 00:21:55.579 { 00:21:55.579 "name": "BaseBdev4", 00:21:55.579 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:21:55.579 "is_configured": true, 00:21:55.579 "data_offset": 2048, 00:21:55.579 "data_size": 63488 00:21:55.579 } 00:21:55.579 ] 00:21:55.579 }' 00:21:55.579 21:04:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.579 21:04:23 -- common/autotest_common.sh@10 -- # set +x 00:21:56.514 21:04:24 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:56.514 [2024-06-09 21:04:24.557786] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:56.514 [2024-06-09 21:04:24.558175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:56.514 21:04:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:56.514 [2024-06-09 21:04:24.608313] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:56.514 [2024-06-09 21:04:24.610655] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:56.804 
[2024-06-09 21:04:24.856211] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:56.804 [2024-06-09 21:04:24.856878] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:57.070 [2024-06-09 21:04:25.219324] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:57.636 [2024-06-09 21:04:25.550981] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:57.637 [2024-06-09 21:04:25.552666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.637 21:04:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.895 21:04:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.895 "name": "raid_bdev1", 00:21:57.896 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:21:57.896 "strip_size_kb": 0, 00:21:57.896 "state": "online", 00:21:57.896 "raid_level": "raid1", 00:21:57.896 "superblock": true, 00:21:57.896 "num_base_bdevs": 4, 00:21:57.896 "num_base_bdevs_discovered": 4, 00:21:57.896 "num_base_bdevs_operational": 4, 00:21:57.896 "process": { 00:21:57.896 "type": "rebuild", 00:21:57.896 "target": "spare", 00:21:57.896 "progress": { 00:21:57.896 "blocks": 16384, 00:21:57.896 "percent": 25 00:21:57.896 } 00:21:57.896 }, 00:21:57.896 "base_bdevs_list": [ 00:21:57.896 { 00:21:57.896 "name": "spare", 00:21:57.896 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:21:57.896 "is_configured": true, 00:21:57.896 "data_offset": 2048, 00:21:57.896 "data_size": 63488 00:21:57.896 }, 00:21:57.896 { 00:21:57.896 "name": "BaseBdev2", 00:21:57.896 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:21:57.896 "is_configured": true, 00:21:57.896 "data_offset": 2048, 00:21:57.896 "data_size": 63488 00:21:57.896 }, 00:21:57.896 { 00:21:57.896 "name": "BaseBdev3", 00:21:57.896 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:21:57.896 "is_configured": true, 00:21:57.896 "data_offset": 2048, 00:21:57.896 "data_size": 63488 00:21:57.896 }, 00:21:57.896 { 00:21:57.896 "name": "BaseBdev4", 00:21:57.896 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:21:57.896 "is_configured": true, 00:21:57.896 "data_offset": 2048, 00:21:57.896 "data_size": 63488 00:21:57.896 } 00:21:57.896 ] 00:21:57.896 }' 00:21:57.896 21:04:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.896 21:04:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.896 21:04:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.896 21:04:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.896 21:04:25 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:57.896 
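The @604 call above removes the very bdev the rebuild is copying to; the debug lines that follow show the rebuild winding down with "No such device" and the @607 state check confirming the array settles back to 3 of 4 members. For reference, the wait loop the rebuild checks sit in elsewhere in this log (bdev_raid.sh@657-662, reconstructed from the trace ordering at those line numbers):

    # @657-662: give the rebuild up to $timeout seconds, sampling once per second
    while (( SECONDS < timeout )); do
        if ! verify_raid_bdev_process raid_bdev1 rebuild spare; then
            break  # .process went back to none: the rebuild finished or was aborted
        fi
        sleep 1
    done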
[2024-06-09 21:04:26.024187] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:58.154 [2024-06-09 21:04:26.196534] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:58.154 [2024-06-09 21:04:26.302721] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:58.154 [2024-06-09 21:04:26.321315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.413 [2024-06-09 21:04:26.349811] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.413 21:04:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.672 21:04:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.672 "name": "raid_bdev1", 00:21:58.672 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:21:58.672 "strip_size_kb": 0, 00:21:58.672 "state": "online", 00:21:58.672 "raid_level": "raid1", 00:21:58.672 "superblock": true, 00:21:58.672 "num_base_bdevs": 4, 00:21:58.672 "num_base_bdevs_discovered": 3, 00:21:58.672 "num_base_bdevs_operational": 3, 00:21:58.672 "base_bdevs_list": [ 00:21:58.672 { 00:21:58.672 "name": null, 00:21:58.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.672 "is_configured": false, 00:21:58.672 "data_offset": 2048, 00:21:58.672 "data_size": 63488 00:21:58.672 }, 00:21:58.672 { 00:21:58.672 "name": "BaseBdev2", 00:21:58.672 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:21:58.672 "is_configured": true, 00:21:58.672 "data_offset": 2048, 00:21:58.672 "data_size": 63488 00:21:58.672 }, 00:21:58.672 { 00:21:58.672 "name": "BaseBdev3", 00:21:58.672 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:21:58.672 "is_configured": true, 00:21:58.672 "data_offset": 2048, 00:21:58.672 "data_size": 63488 00:21:58.672 }, 00:21:58.672 { 00:21:58.672 "name": "BaseBdev4", 00:21:58.672 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:21:58.672 "is_configured": true, 00:21:58.672 "data_offset": 2048, 00:21:58.672 "data_size": 63488 00:21:58.672 } 00:21:58.672 ] 00:21:58.672 }' 00:21:58.672 21:04:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.672 21:04:26 -- common/autotest_common.sh@10 -- # set +x 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@185 -- # local target=none 
00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.239 21:04:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.498 21:04:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.498 "name": "raid_bdev1", 00:21:59.498 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:21:59.498 "strip_size_kb": 0, 00:21:59.498 "state": "online", 00:21:59.498 "raid_level": "raid1", 00:21:59.498 "superblock": true, 00:21:59.498 "num_base_bdevs": 4, 00:21:59.498 "num_base_bdevs_discovered": 3, 00:21:59.498 "num_base_bdevs_operational": 3, 00:21:59.498 "base_bdevs_list": [ 00:21:59.498 { 00:21:59.498 "name": null, 00:21:59.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.498 "is_configured": false, 00:21:59.498 "data_offset": 2048, 00:21:59.498 "data_size": 63488 00:21:59.498 }, 00:21:59.498 { 00:21:59.498 "name": "BaseBdev2", 00:21:59.498 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:21:59.498 "is_configured": true, 00:21:59.498 "data_offset": 2048, 00:21:59.498 "data_size": 63488 00:21:59.498 }, 00:21:59.498 { 00:21:59.498 "name": "BaseBdev3", 00:21:59.498 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:21:59.498 "is_configured": true, 00:21:59.498 "data_offset": 2048, 00:21:59.498 "data_size": 63488 00:21:59.498 }, 00:21:59.498 { 00:21:59.498 "name": "BaseBdev4", 00:21:59.498 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:21:59.498 "is_configured": true, 00:21:59.498 "data_offset": 2048, 00:21:59.498 "data_size": 63488 00:21:59.498 } 00:21:59.498 ] 00:21:59.498 }' 00:21:59.498 21:04:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.498 21:04:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:59.498 21:04:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.757 21:04:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:59.757 21:04:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:59.757 [2024-06-09 21:04:27.915713] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:59.757 [2024-06-09 21:04:27.916090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:00.016 21:04:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:00.016 [2024-06-09 21:04:27.982479] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:00.016 [2024-06-09 21:04:27.984880] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:00.016 [2024-06-09 21:04:28.099807] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:00.016 [2024-06-09 21:04:28.101488] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:00.275 [2024-06-09 21:04:28.306578] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:00.275 [2024-06-09 21:04:28.307239] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:00.533 [2024-06-09 21:04:28.549037] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 
12288 00:22:00.533 [2024-06-09 21:04:28.550818] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:00.792 [2024-06-09 21:04:28.754501] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:00.792 [2024-06-09 21:04:28.754915] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.051 21:04:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.051 [2024-06-09 21:04:29.008399] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:01.051 [2024-06-09 21:04:29.134178] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:01.051 21:04:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:01.051 "name": "raid_bdev1", 00:22:01.051 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:01.051 "strip_size_kb": 0, 00:22:01.051 "state": "online", 00:22:01.051 "raid_level": "raid1", 00:22:01.051 "superblock": true, 00:22:01.051 "num_base_bdevs": 4, 00:22:01.051 "num_base_bdevs_discovered": 4, 00:22:01.051 "num_base_bdevs_operational": 4, 00:22:01.051 "process": { 00:22:01.051 "type": "rebuild", 00:22:01.051 "target": "spare", 00:22:01.051 "progress": { 00:22:01.051 "blocks": 16384, 00:22:01.051 "percent": 25 00:22:01.051 } 00:22:01.051 }, 00:22:01.051 "base_bdevs_list": [ 00:22:01.051 { 00:22:01.051 "name": "spare", 00:22:01.051 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:01.051 "is_configured": true, 00:22:01.051 "data_offset": 2048, 00:22:01.051 "data_size": 63488 00:22:01.051 }, 00:22:01.051 { 00:22:01.051 "name": "BaseBdev2", 00:22:01.051 "uuid": "73045719-d131-5c23-91b0-661bf0c7edb8", 00:22:01.051 "is_configured": true, 00:22:01.051 "data_offset": 2048, 00:22:01.051 "data_size": 63488 00:22:01.051 }, 00:22:01.051 { 00:22:01.051 "name": "BaseBdev3", 00:22:01.051 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:01.051 "is_configured": true, 00:22:01.051 "data_offset": 2048, 00:22:01.051 "data_size": 63488 00:22:01.051 }, 00:22:01.051 { 00:22:01.051 "name": "BaseBdev4", 00:22:01.051 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:01.051 "is_configured": true, 00:22:01.052 "data_offset": 2048, 00:22:01.052 "data_size": 63488 00:22:01.052 } 00:22:01.052 ] 00:22:01.052 }' 00:22:01.052 21:04:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 
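The second @617 test above expanded an empty variable, leaving '[' = false ']': with only one operand left after expansion, '[' no longer sees a binary comparison, which is exactly what the shell complains about on the next line, and the script carries on because the failed test is not fatal here. Two conventional fixes, with flag standing in for whichever variable the script meant to expand (its name is not visible in the trace):

    # quote the expansion so an empty value still counts as one (empty) operand
    if [ "$flag" = false ]; then
        :  # branch body not visible in this log
    fi
    # or use [[ ]], which does not word-split, so the unquoted form stays safe
    if [[ $flag = false ]]; then
        :
    fi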
00:22:01.310 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:01.310 21:04:29 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:01.568 [2024-06-09 21:04:29.521469] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:01.568 [2024-06-09 21:04:29.523102] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:01.568 [2024-06-09 21:04:29.550197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:01.826 [2024-06-09 21:04:29.748470] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:01.826 [2024-06-09 21:04:29.851549] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:22:01.827 [2024-06-09 21:04:29.851694] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.827 21:04:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.085 [2024-06-09 21:04:30.102313] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:02.085 [2024-06-09 21:04:30.212266] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:02.085 21:04:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:02.085 "name": "raid_bdev1", 00:22:02.085 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:02.085 "strip_size_kb": 0, 00:22:02.085 "state": "online", 00:22:02.085 "raid_level": "raid1", 00:22:02.085 "superblock": true, 00:22:02.085 "num_base_bdevs": 4, 00:22:02.085 "num_base_bdevs_discovered": 3, 00:22:02.085 "num_base_bdevs_operational": 3, 00:22:02.085 "process": { 00:22:02.085 "type": "rebuild", 00:22:02.085 "target": "spare", 00:22:02.085 "progress": { 00:22:02.085 "blocks": 28672, 00:22:02.085 "percent": 45 00:22:02.085 } 00:22:02.085 }, 00:22:02.085 "base_bdevs_list": [ 00:22:02.085 { 00:22:02.085 "name": "spare", 00:22:02.085 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:02.085 "is_configured": true, 00:22:02.085 "data_offset": 2048, 00:22:02.085 "data_size": 63488 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": null, 00:22:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.085 "is_configured": false, 00:22:02.085 
"data_offset": 2048, 00:22:02.085 "data_size": 63488 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": "BaseBdev3", 00:22:02.085 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:02.085 "is_configured": true, 00:22:02.085 "data_offset": 2048, 00:22:02.085 "data_size": 63488 00:22:02.085 }, 00:22:02.085 { 00:22:02.085 "name": "BaseBdev4", 00:22:02.085 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:02.085 "is_configured": true, 00:22:02.085 "data_offset": 2048, 00:22:02.085 "data_size": 63488 00:22:02.085 } 00:22:02.086 ] 00:22:02.086 }' 00:22:02.086 21:04:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@657 -- # local timeout=542 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.344 21:04:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.602 [2024-06-09 21:04:30.537359] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:02.602 21:04:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:02.602 "name": "raid_bdev1", 00:22:02.602 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:02.602 "strip_size_kb": 0, 00:22:02.602 "state": "online", 00:22:02.602 "raid_level": "raid1", 00:22:02.602 "superblock": true, 00:22:02.602 "num_base_bdevs": 4, 00:22:02.602 "num_base_bdevs_discovered": 3, 00:22:02.602 "num_base_bdevs_operational": 3, 00:22:02.602 "process": { 00:22:02.602 "type": "rebuild", 00:22:02.602 "target": "spare", 00:22:02.602 "progress": { 00:22:02.602 "blocks": 32768, 00:22:02.602 "percent": 51 00:22:02.603 } 00:22:02.603 }, 00:22:02.603 "base_bdevs_list": [ 00:22:02.603 { 00:22:02.603 "name": "spare", 00:22:02.603 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:02.603 "is_configured": true, 00:22:02.603 "data_offset": 2048, 00:22:02.603 "data_size": 63488 00:22:02.603 }, 00:22:02.603 { 00:22:02.603 "name": null, 00:22:02.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.603 "is_configured": false, 00:22:02.603 "data_offset": 2048, 00:22:02.603 "data_size": 63488 00:22:02.603 }, 00:22:02.603 { 00:22:02.603 "name": "BaseBdev3", 00:22:02.603 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:02.603 "is_configured": true, 00:22:02.603 "data_offset": 2048, 00:22:02.603 "data_size": 63488 00:22:02.603 }, 00:22:02.603 { 00:22:02.603 "name": "BaseBdev4", 00:22:02.603 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:02.603 "is_configured": true, 00:22:02.603 "data_offset": 2048, 00:22:02.603 "data_size": 63488 00:22:02.603 } 00:22:02.603 ] 00:22:02.603 }' 00:22:02.603 21:04:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:02.603 21:04:30 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.603 21:04:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:02.603 [2024-06-09 21:04:30.649412] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:02.603 21:04:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.603 21:04:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:02.861 [2024-06-09 21:04:30.902397] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:03.427 [2024-06-09 21:04:31.345829] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:03.685 [2024-06-09 21:04:31.679543] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:03.685 21:04:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.686 21:04:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.944 21:04:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:03.944 "name": "raid_bdev1", 00:22:03.944 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:03.944 "strip_size_kb": 0, 00:22:03.944 "state": "online", 00:22:03.944 "raid_level": "raid1", 00:22:03.944 "superblock": true, 00:22:03.944 "num_base_bdevs": 4, 00:22:03.944 "num_base_bdevs_discovered": 3, 00:22:03.944 "num_base_bdevs_operational": 3, 00:22:03.944 "process": { 00:22:03.944 "type": "rebuild", 00:22:03.944 "target": "spare", 00:22:03.944 "progress": { 00:22:03.944 "blocks": 53248, 00:22:03.944 "percent": 83 00:22:03.944 } 00:22:03.944 }, 00:22:03.944 "base_bdevs_list": [ 00:22:03.944 { 00:22:03.944 "name": "spare", 00:22:03.944 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": null, 00:22:03.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.944 "is_configured": false, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": "BaseBdev3", 00:22:03.944 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": "BaseBdev4", 00:22:03.944 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 } 00:22:03.944 ] 00:22:03.944 }' 00:22:03.944 21:04:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:03.944 21:04:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:03.944 21:04:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:03.944 21:04:32 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.944 21:04:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:04.202 [2024-06-09 21:04:32.220983] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:04.460 [2024-06-09 21:04:32.554159] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:04.719 [2024-06-09 21:04:32.654231] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:04.719 [2024-06-09 21:04:32.656991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.978 21:04:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.237 "name": "raid_bdev1", 00:22:05.237 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:05.237 "strip_size_kb": 0, 00:22:05.237 "state": "online", 00:22:05.237 "raid_level": "raid1", 00:22:05.237 "superblock": true, 00:22:05.237 "num_base_bdevs": 4, 00:22:05.237 "num_base_bdevs_discovered": 3, 00:22:05.237 "num_base_bdevs_operational": 3, 00:22:05.237 "base_bdevs_list": [ 00:22:05.237 { 00:22:05.237 "name": "spare", 00:22:05.237 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:05.237 "is_configured": true, 00:22:05.237 "data_offset": 2048, 00:22:05.237 "data_size": 63488 00:22:05.237 }, 00:22:05.237 { 00:22:05.237 "name": null, 00:22:05.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.237 "is_configured": false, 00:22:05.237 "data_offset": 2048, 00:22:05.237 "data_size": 63488 00:22:05.237 }, 00:22:05.237 { 00:22:05.237 "name": "BaseBdev3", 00:22:05.237 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:05.237 "is_configured": true, 00:22:05.237 "data_offset": 2048, 00:22:05.237 "data_size": 63488 00:22:05.237 }, 00:22:05.237 { 00:22:05.237 "name": "BaseBdev4", 00:22:05.237 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:05.237 "is_configured": true, 00:22:05.237 "data_offset": 2048, 00:22:05.237 "data_size": 63488 00:22:05.237 } 00:22:05.237 ] 00:22:05.237 }' 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@660 -- # break 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.237 21:04:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.496 21:04:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.496 "name": "raid_bdev1", 00:22:05.496 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:05.496 "strip_size_kb": 0, 00:22:05.496 "state": "online", 00:22:05.496 "raid_level": "raid1", 00:22:05.496 "superblock": true, 00:22:05.496 "num_base_bdevs": 4, 00:22:05.496 "num_base_bdevs_discovered": 3, 00:22:05.496 "num_base_bdevs_operational": 3, 00:22:05.496 "base_bdevs_list": [ 00:22:05.496 { 00:22:05.496 "name": "spare", 00:22:05.496 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:05.496 "is_configured": true, 00:22:05.496 "data_offset": 2048, 00:22:05.496 "data_size": 63488 00:22:05.496 }, 00:22:05.496 { 00:22:05.496 "name": null, 00:22:05.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.496 "is_configured": false, 00:22:05.496 "data_offset": 2048, 00:22:05.496 "data_size": 63488 00:22:05.496 }, 00:22:05.496 { 00:22:05.496 "name": "BaseBdev3", 00:22:05.496 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:05.496 "is_configured": true, 00:22:05.496 "data_offset": 2048, 00:22:05.496 "data_size": 63488 00:22:05.496 }, 00:22:05.496 { 00:22:05.496 "name": "BaseBdev4", 00:22:05.496 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:05.496 "is_configured": true, 00:22:05.496 "data_offset": 2048, 00:22:05.496 "data_size": 63488 00:22:05.496 } 00:22:05.496 ] 00:22:05.496 }' 00:22:05.496 21:04:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.755 21:04:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.014 21:04:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.014 "name": "raid_bdev1", 00:22:06.014 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:06.014 "strip_size_kb": 0, 00:22:06.014 "state": "online", 00:22:06.014 "raid_level": "raid1", 00:22:06.014 "superblock": true, 00:22:06.014 "num_base_bdevs": 4, 00:22:06.014 "num_base_bdevs_discovered": 3, 00:22:06.014 "num_base_bdevs_operational": 3, 00:22:06.014 "base_bdevs_list": [ 00:22:06.014 { 00:22:06.014 "name": "spare", 00:22:06.014 "uuid": 
"d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:06.014 "is_configured": true, 00:22:06.014 "data_offset": 2048, 00:22:06.014 "data_size": 63488 00:22:06.014 }, 00:22:06.014 { 00:22:06.014 "name": null, 00:22:06.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.014 "is_configured": false, 00:22:06.014 "data_offset": 2048, 00:22:06.014 "data_size": 63488 00:22:06.014 }, 00:22:06.014 { 00:22:06.014 "name": "BaseBdev3", 00:22:06.014 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:06.014 "is_configured": true, 00:22:06.014 "data_offset": 2048, 00:22:06.014 "data_size": 63488 00:22:06.014 }, 00:22:06.014 { 00:22:06.014 "name": "BaseBdev4", 00:22:06.014 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:06.014 "is_configured": true, 00:22:06.014 "data_offset": 2048, 00:22:06.014 "data_size": 63488 00:22:06.014 } 00:22:06.014 ] 00:22:06.014 }' 00:22:06.014 21:04:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.014 21:04:33 -- common/autotest_common.sh@10 -- # set +x 00:22:06.584 21:04:34 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:06.843 [2024-06-09 21:04:34.764165] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:06.843 [2024-06-09 21:04:34.764461] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:06.843 00:22:06.843 Latency(us) 00:22:06.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.843 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:06.843 raid_bdev1 : 11.44 108.20 324.60 0.00 0.00 13094.81 286.72 111530.36 00:22:06.843 =================================================================================================================== 00:22:06.843 Total : 108.20 324.60 0.00 0.00 13094.81 286.72 111530.36 00:22:06.843 [2024-06-09 21:04:34.852475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.843 [2024-06-09 21:04:34.852638] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.843 [2024-06-09 21:04:34.852789] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.843 0 00:22:06.843 [2024-06-09 21:04:34.853074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:22:06.843 21:04:34 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.843 21:04:34 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:07.102 21:04:35 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:07.102 21:04:35 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:07.102 21:04:35 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@12 -- # local i 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.102 21:04:35 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:07.361 /dev/nbd0 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:07.361 21:04:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:07.361 21:04:35 -- common/autotest_common.sh@857 -- # local i 00:22:07.361 21:04:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:07.361 21:04:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:07.361 21:04:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:07.361 21:04:35 -- common/autotest_common.sh@861 -- # break 00:22:07.361 21:04:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:07.361 21:04:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:07.361 21:04:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.361 1+0 records in 00:22:07.361 1+0 records out 00:22:07.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550988 s, 7.4 MB/s 00:22:07.361 21:04:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.361 21:04:35 -- common/autotest_common.sh@874 -- # size=4096 00:22:07.361 21:04:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.361 21:04:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:07.361 21:04:35 -- common/autotest_common.sh@877 -- # return 0 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@678 -- # continue 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:07.361 21:04:35 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@12 -- # local i 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.361 21:04:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:07.620 /dev/nbd1 00:22:07.620 21:04:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:07.620 21:04:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:07.620 21:04:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:07.620 21:04:35 -- common/autotest_common.sh@857 -- # local i 00:22:07.620 21:04:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:07.620 21:04:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:07.620 21:04:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:07.620 21:04:35 -- common/autotest_common.sh@861 -- # break 
00:22:07.620 21:04:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:07.620 21:04:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:07.620 21:04:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:07.620 1+0 records in 00:22:07.620 1+0 records out 00:22:07.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508878 s, 8.0 MB/s 00:22:07.620 21:04:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.620 21:04:35 -- common/autotest_common.sh@874 -- # size=4096 00:22:07.620 21:04:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:07.620 21:04:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:07.620 21:04:35 -- common/autotest_common.sh@877 -- # return 0 00:22:07.620 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:07.620 21:04:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:07.620 21:04:35 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:07.879 21:04:35 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@51 -- # local i 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.879 21:04:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@41 -- # break 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.138 21:04:36 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:08.138 21:04:36 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:08.138 21:04:36 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:08.138 21:04:36 -- bdev/nbd_common.sh@12 -- # local i 00:22:08.139 21:04:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:08.139 21:04:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:08.139 21:04:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:08.139 /dev/nbd1 00:22:08.397 21:04:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:08.397 21:04:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:08.397 21:04:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:08.397 21:04:36 -- 
common/autotest_common.sh@857 -- # local i 00:22:08.397 21:04:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:08.398 21:04:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:08.398 21:04:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:08.398 21:04:36 -- common/autotest_common.sh@861 -- # break 00:22:08.398 21:04:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:08.398 21:04:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:08.398 21:04:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:08.398 1+0 records in 00:22:08.398 1+0 records out 00:22:08.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626227 s, 6.5 MB/s 00:22:08.398 21:04:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.398 21:04:36 -- common/autotest_common.sh@874 -- # size=4096 00:22:08.398 21:04:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:08.398 21:04:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:08.398 21:04:36 -- common/autotest_common.sh@877 -- # return 0 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:08.398 21:04:36 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:08.398 21:04:36 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@51 -- # local i 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.398 21:04:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@41 -- # break 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.656 21:04:36 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@51 -- # local i 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.656 21:04:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.915 21:04:36 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@41 -- # break 00:22:08.915 21:04:36 -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.915 21:04:36 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:08.915 21:04:36 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:08.915 21:04:36 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:08.915 21:04:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:09.174 21:04:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:09.432 [2024-06-09 21:04:37.500193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:09.432 [2024-06-09 21:04:37.500312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.432 [2024-06-09 21:04:37.500357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:09.432 [2024-06-09 21:04:37.500383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.432 [2024-06-09 21:04:37.502876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.432 [2024-06-09 21:04:37.502951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:09.432 [2024-06-09 21:04:37.503055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:09.432 [2024-06-09 21:04:37.503119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.432 BaseBdev1 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@696 -- # continue 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:09.432 21:04:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:09.691 21:04:37 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:09.949 [2024-06-09 21:04:37.932342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:09.949 [2024-06-09 21:04:37.932420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.949 [2024-06-09 21:04:37.932462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:09.949 [2024-06-09 21:04:37.932485] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.949 [2024-06-09 21:04:37.932901] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.949 [2024-06-09 21:04:37.932959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:09.949 [2024-06-09 21:04:37.933051] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:09.949 [2024-06-09 21:04:37.933066] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than 
existing raid bdev raid_bdev1 (1) 00:22:09.949 [2024-06-09 21:04:37.933073] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:09.949 [2024-06-09 21:04:37.933091] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:09.949 [2024-06-09 21:04:37.933156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.949 BaseBdev3 00:22:09.949 21:04:37 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:09.949 21:04:37 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:09.949 21:04:37 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:10.208 21:04:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:10.208 [2024-06-09 21:04:38.376435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:10.208 [2024-06-09 21:04:38.376527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.208 [2024-06-09 21:04:38.376581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:10.208 [2024-06-09 21:04:38.376615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.208 [2024-06-09 21:04:38.377079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.208 [2024-06-09 21:04:38.377129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:10.208 [2024-06-09 21:04:38.377240] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:10.208 [2024-06-09 21:04:38.377277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:10.208 BaseBdev4 00:22:10.467 21:04:38 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:10.467 21:04:38 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:10.726 [2024-06-09 21:04:38.776560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:10.726 [2024-06-09 21:04:38.776623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.726 [2024-06-09 21:04:38.776667] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:10.726 [2024-06-09 21:04:38.776695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.726 [2024-06-09 21:04:38.777120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.726 [2024-06-09 21:04:38.777176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:10.726 [2024-06-09 21:04:38.777293] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:10.726 [2024-06-09 21:04:38.777323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.726 spare 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.726 21:04:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.726 [2024-06-09 21:04:38.877443] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:10.726 [2024-06-09 21:04:38.877480] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:10.726 [2024-06-09 21:04:38.877649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:22:10.726 [2024-06-09 21:04:38.878029] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:10.726 [2024-06-09 21:04:38.878042] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:10.726 [2024-06-09 21:04:38.878172] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.985 21:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.985 "name": "raid_bdev1", 00:22:10.985 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:10.985 "strip_size_kb": 0, 00:22:10.985 "state": "online", 00:22:10.985 "raid_level": "raid1", 00:22:10.985 "superblock": true, 00:22:10.985 "num_base_bdevs": 4, 00:22:10.985 "num_base_bdevs_discovered": 3, 00:22:10.985 "num_base_bdevs_operational": 3, 00:22:10.985 "base_bdevs_list": [ 00:22:10.985 { 00:22:10.985 "name": "spare", 00:22:10.985 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:10.985 "is_configured": true, 00:22:10.985 "data_offset": 2048, 00:22:10.985 "data_size": 63488 00:22:10.985 }, 00:22:10.985 { 00:22:10.985 "name": null, 00:22:10.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.985 "is_configured": false, 00:22:10.985 "data_offset": 2048, 00:22:10.985 "data_size": 63488 00:22:10.985 }, 00:22:10.985 { 00:22:10.985 "name": "BaseBdev3", 00:22:10.985 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:10.985 "is_configured": true, 00:22:10.985 "data_offset": 2048, 00:22:10.985 "data_size": 63488 00:22:10.985 }, 00:22:10.985 { 00:22:10.985 "name": "BaseBdev4", 00:22:10.985 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:10.985 "is_configured": true, 00:22:10.985 "data_offset": 2048, 00:22:10.985 "data_size": 63488 00:22:10.985 } 00:22:10.985 ] 00:22:10.985 }' 00:22:10.985 21:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.985 21:04:39 -- common/autotest_common.sh@10 -- # set +x 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:11.552 21:04:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.811 "name": "raid_bdev1", 00:22:11.811 "uuid": "86fd5ea2-f87d-42ac-913f-edd38c7436b9", 00:22:11.811 "strip_size_kb": 0, 00:22:11.811 "state": "online", 00:22:11.811 "raid_level": "raid1", 00:22:11.811 "superblock": true, 00:22:11.811 "num_base_bdevs": 4, 00:22:11.811 "num_base_bdevs_discovered": 3, 00:22:11.811 "num_base_bdevs_operational": 3, 00:22:11.811 "base_bdevs_list": [ 00:22:11.811 { 00:22:11.811 "name": "spare", 00:22:11.811 "uuid": "d1a4fa1d-0101-5618-a0a2-d65c275913bb", 00:22:11.811 "is_configured": true, 00:22:11.811 "data_offset": 2048, 00:22:11.811 "data_size": 63488 00:22:11.811 }, 00:22:11.811 { 00:22:11.811 "name": null, 00:22:11.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.811 "is_configured": false, 00:22:11.811 "data_offset": 2048, 00:22:11.811 "data_size": 63488 00:22:11.811 }, 00:22:11.811 { 00:22:11.811 "name": "BaseBdev3", 00:22:11.811 "uuid": "de2f4604-16e0-53b9-8eda-fe838eeafd0d", 00:22:11.811 "is_configured": true, 00:22:11.811 "data_offset": 2048, 00:22:11.811 "data_size": 63488 00:22:11.811 }, 00:22:11.811 { 00:22:11.811 "name": "BaseBdev4", 00:22:11.811 "uuid": "128f5015-cf79-5441-8c1e-e54540fe3fce", 00:22:11.811 "is_configured": true, 00:22:11.811 "data_offset": 2048, 00:22:11.811 "data_size": 63488 00:22:11.811 } 00:22:11.811 ] 00:22:11.811 }' 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.811 21:04:39 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:12.069 21:04:40 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.069 21:04:40 -- bdev/bdev_raid.sh@709 -- # killprocess 125722 00:22:12.069 21:04:40 -- common/autotest_common.sh@926 -- # '[' -z 125722 ']' 00:22:12.069 21:04:40 -- common/autotest_common.sh@930 -- # kill -0 125722 00:22:12.069 21:04:40 -- common/autotest_common.sh@931 -- # uname 00:22:12.069 21:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:12.069 21:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125722 00:22:12.069 21:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:12.070 21:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:12.070 21:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125722' 00:22:12.070 killing process with pid 125722 00:22:12.070 Received shutdown signal, test time was about 16.759789 seconds 00:22:12.070 00:22:12.070 Latency(us) 00:22:12.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.070 =================================================================================================================== 00:22:12.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.070 21:04:40 -- common/autotest_common.sh@945 -- # kill 125722 00:22:12.070 [2024-06-09 21:04:40.154104] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.070 [2024-06-09 
21:04:40.154175] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.070 21:04:40 -- common/autotest_common.sh@950 -- # wait 125722 00:22:12.070 [2024-06-09 21:04:40.154248] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.070 [2024-06-09 21:04:40.154260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:12.328 [2024-06-09 21:04:40.456680] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.704 ************************************ 00:22:13.704 END TEST raid_rebuild_test_sb_io 00:22:13.705 ************************************ 00:22:13.705 21:04:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:13.705 00:22:13.705 real 0m23.335s 00:22:13.705 user 0m37.300s 00:22:13.705 sys 0m2.825s 00:22:13.705 21:04:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.705 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:13.705 21:04:41 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:22:13.705 21:04:41 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:22:13.705 ************************************ 00:22:13.705 END TEST bdev_raid 00:22:13.705 ************************************ 00:22:13.705 00:22:13.705 real 8m43.590s 00:22:13.705 user 14m27.059s 00:22:13.705 sys 1m7.473s 00:22:13.705 21:04:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.705 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:13.705 21:04:41 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:13.705 21:04:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:13.705 21:04:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:13.705 21:04:41 -- common/autotest_common.sh@10 -- # set +x 00:22:13.705 ************************************ 00:22:13.705 START TEST bdevperf_config 00:22:13.705 ************************************ 00:22:13.705 21:04:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:22:13.705 * Looking for test storage... 
00:22:13.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:22:13.705 21:04:41 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:22:13.705 21:04:41 -- bdevperf/common.sh@8 -- # local job_section=global 00:22:13.705 21:04:41 -- bdevperf/common.sh@9 -- # local rw=read 00:22:13.705 21:04:41 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:13.705 21:04:41 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:22:13.705 21:04:41 -- bdevperf/common.sh@13 -- # cat 00:22:13.705 21:04:41 -- bdevperf/common.sh@18 -- # job='[global]' 00:22:13.705 00:22:13.705 21:04:41 -- bdevperf/common.sh@19 -- # echo 00:22:13.705 21:04:41 -- bdevperf/common.sh@20 -- # cat 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@18 -- # create_job job0 00:22:13.705 21:04:41 -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:13.705 21:04:41 -- bdevperf/common.sh@9 -- # local rw= 00:22:13.705 21:04:41 -- bdevperf/common.sh@10 -- # local filename= 00:22:13.705 21:04:41 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:13.705 21:04:41 -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:13.705 00:22:13.705 21:04:41 -- bdevperf/common.sh@19 -- # echo 00:22:13.705 21:04:41 -- bdevperf/common.sh@20 -- # cat 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@19 -- # create_job job1 00:22:13.705 21:04:41 -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:13.705 21:04:41 -- bdevperf/common.sh@9 -- # local rw= 00:22:13.705 21:04:41 -- bdevperf/common.sh@10 -- # local filename= 00:22:13.705 21:04:41 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:13.705 21:04:41 -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:13.705 00:22:13.705 21:04:41 -- bdevperf/common.sh@19 -- # echo 00:22:13.705 21:04:41 -- bdevperf/common.sh@20 -- # cat 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@20 -- # create_job job2 00:22:13.705 21:04:41 -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:13.705 21:04:41 -- bdevperf/common.sh@9 -- # local rw= 00:22:13.705 21:04:41 -- bdevperf/common.sh@10 -- # local filename= 00:22:13.705 21:04:41 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:13.705 21:04:41 -- bdevperf/common.sh@18 -- # job='[job2]' 00:22:13.705 00:22:13.705 21:04:41 -- bdevperf/common.sh@19 -- # echo 00:22:13.705 21:04:41 -- bdevperf/common.sh@20 -- # cat 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@21 -- # create_job job3 00:22:13.705 21:04:41 -- bdevperf/common.sh@8 -- # local job_section=job3 00:22:13.705 21:04:41 -- bdevperf/common.sh@9 -- # local rw= 00:22:13.705 21:04:41 -- bdevperf/common.sh@10 -- # local filename= 00:22:13.705 21:04:41 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:22:13.705 21:04:41 -- bdevperf/common.sh@18 -- # job='[job3]' 00:22:13.705 00:22:13.705 21:04:41 -- bdevperf/common.sh@19 -- # echo 00:22:13.705 21:04:41 -- bdevperf/common.sh@20 -- # cat 00:22:13.705 21:04:41 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:17.913 21:04:45 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-09 21:04:41.816177] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:17.913 [2024-06-09 21:04:41.816374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126391 ] 00:22:17.913 Using job config with 4 jobs 00:22:17.913 [2024-06-09 21:04:41.984183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.913 [2024-06-09 21:04:42.198455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.913 cpumask for '\''job0'\'' is too big 00:22:17.913 cpumask for '\''job1'\'' is too big 00:22:17.913 cpumask for '\''job2'\'' is too big 00:22:17.913 cpumask for '\''job3'\'' is too big 00:22:17.913 Running I/O for 2 seconds... 00:22:17.913 00:22:17.913 Latency(us) 00:22:17.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.01 32372.14 31.61 0.00 0.00 7902.57 1482.01 12690.15 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32383.57 31.62 0.00 0.00 7884.78 1370.30 11319.85 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32362.65 31.60 0.00 0.00 7875.43 1444.77 10128.29 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32341.63 31.58 0.00 0.00 7867.75 1534.14 9770.82 00:22:17.913 =================================================================================================================== 00:22:17.913 Total : 129459.99 126.43 0.00 0.00 7882.61 1370.30 12690.15' 00:22:17.913 21:04:45 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-09 21:04:41.816177] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:17.913 [2024-06-09 21:04:41.816374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126391 ] 00:22:17.913 Using job config with 4 jobs 00:22:17.913 [2024-06-09 21:04:41.984183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.913 [2024-06-09 21:04:42.198455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.913 cpumask for '\''job0'\'' is too big 00:22:17.913 cpumask for '\''job1'\'' is too big 00:22:17.913 cpumask for '\''job2'\'' is too big 00:22:17.913 cpumask for '\''job3'\'' is too big 00:22:17.913 Running I/O for 2 seconds... 
00:22:17.913 00:22:17.913 Latency(us) 00:22:17.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.01 32372.14 31.61 0.00 0.00 7902.57 1482.01 12690.15 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32383.57 31.62 0.00 0.00 7884.78 1370.30 11319.85 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32362.65 31.60 0.00 0.00 7875.43 1444.77 10128.29 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32341.63 31.58 0.00 0.00 7867.75 1534.14 9770.82 00:22:17.913 =================================================================================================================== 00:22:17.913 Total : 129459.99 126.43 0.00 0.00 7882.61 1370.30 12690.15' 00:22:17.913 21:04:45 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:22:17.913 21:04:45 -- bdevperf/common.sh@32 -- # echo '[2024-06-09 21:04:41.816177] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:17.913 [2024-06-09 21:04:41.816374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126391 ] 00:22:17.913 Using job config with 4 jobs 00:22:17.913 [2024-06-09 21:04:41.984183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.913 [2024-06-09 21:04:42.198455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.913 cpumask for '\''job0'\'' is too big 00:22:17.913 cpumask for '\''job1'\'' is too big 00:22:17.913 cpumask for '\''job2'\'' is too big 00:22:17.913 cpumask for '\''job3'\'' is too big 00:22:17.913 Running I/O for 2 seconds... 00:22:17.913 00:22:17.913 Latency(us) 00:22:17.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.01 32372.14 31.61 0.00 0.00 7902.57 1482.01 12690.15 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32383.57 31.62 0.00 0.00 7884.78 1370.30 11319.85 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32362.65 31.60 0.00 0.00 7875.43 1444.77 10128.29 00:22:17.913 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:17.913 Malloc0 : 2.02 32341.63 31.58 0.00 0.00 7867.75 1534.14 9770.82 00:22:17.913 =================================================================================================================== 00:22:17.913 Total : 129459.99 126.43 0.00 0.00 7882.61 1370.30 12690.15' 00:22:17.913 21:04:45 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:22:17.913 21:04:45 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:22:17.913 21:04:45 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:17.913 [2024-06-09 21:04:46.000377] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:22:17.913 [2024-06-09 21:04:46.000598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126444 ] 00:22:18.172 [2024-06-09 21:04:46.168165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.431 [2024-06-09 21:04:46.389021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.689 cpumask for 'job0' is too big 00:22:18.689 cpumask for 'job1' is too big 00:22:18.689 cpumask for 'job2' is too big 00:22:18.689 cpumask for 'job3' is too big 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:22:21.976 Running I/O for 2 seconds... 00:22:21.976 00:22:21.976 Latency(us) 00:22:21.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.976 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:21.976 Malloc0 : 2.01 32438.28 31.68 0.00 0.00 7883.33 1534.14 12392.26 00:22:21.976 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:21.976 Malloc0 : 2.01 32416.62 31.66 0.00 0.00 7874.93 1429.88 10902.81 00:22:21.976 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:21.976 Malloc0 : 2.02 32393.98 31.63 0.00 0.00 7866.52 1444.77 9413.35 00:22:21.976 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:22:21.976 Malloc0 : 2.02 32465.62 31.70 0.00 0.00 7835.41 834.09 9472.93 00:22:21.976 =================================================================================================================== 00:22:21.976 Total : 129714.50 126.67 0.00 0.00 7865.02 834.09 12392.26' 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@27 -- # cleanup 00:22:21.976 21:04:50 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:21.976 21:04:50 -- bdevperf/common.sh@9 -- # local rw=write 00:22:21.976 21:04:50 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:21.976 00:22:21.976 21:04:50 -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:21.976 21:04:50 -- bdevperf/common.sh@19 -- # echo 00:22:21.976 21:04:50 -- bdevperf/common.sh@20 -- # cat 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:21.976 21:04:50 -- bdevperf/common.sh@9 -- # local rw=write 00:22:21.976 21:04:50 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:21.976 21:04:50 -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:21.976 00:22:21.976 21:04:50 -- bdevperf/common.sh@19 -- # echo 00:22:21.976 21:04:50 -- bdevperf/common.sh@20 -- # cat 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:21.976 21:04:50 -- bdevperf/common.sh@9 -- # local rw=write 00:22:21.976 21:04:50 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:22:21.976 21:04:50 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:21.976 21:04:50 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:22:21.976 00:22:21.976 21:04:50 -- bdevperf/common.sh@19 -- # echo 00:22:21.976 21:04:50 -- bdevperf/common.sh@20 -- # cat 00:22:21.976 21:04:50 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:26.166 21:04:54 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-09 21:04:50.175373] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:26.167 [2024-06-09 21:04:50.175580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126500 ] 00:22:26.167 Using job config with 3 jobs 00:22:26.167 [2024-06-09 21:04:50.343210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.167 [2024-06-09 21:04:50.554019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.167 cpumask for '\''job0'\'' is too big 00:22:26.167 cpumask for '\''job1'\'' is too big 00:22:26.167 cpumask for '\''job2'\'' is too big 00:22:26.167 Running I/O for 2 seconds... 00:22:26.167 00:22:26.167 Latency(us) 00:22:26.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43402.54 42.39 0.00 0.00 5892.33 1474.56 8817.57 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43373.34 42.36 0.00 0.00 5886.05 1362.85 7804.74 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43344.24 42.33 0.00 0.00 5880.08 1414.98 7804.74 00:22:26.167 =================================================================================================================== 00:22:26.167 Total : 130120.12 127.07 0.00 0.00 5886.15 1362.85 8817.57' 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-09 21:04:50.175373] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:26.167 [2024-06-09 21:04:50.175580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126500 ] 00:22:26.167 Using job config with 3 jobs 00:22:26.167 [2024-06-09 21:04:50.343210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.167 [2024-06-09 21:04:50.554019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.167 cpumask for '\''job0'\'' is too big 00:22:26.167 cpumask for '\''job1'\'' is too big 00:22:26.167 cpumask for '\''job2'\'' is too big 00:22:26.167 Running I/O for 2 seconds... 
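The create_job job0/job1/job2 calls traced here each declare job_section, rw and filename locals, special-case the [global] section, and append one INI section to the test.conf that bdevperf's -j flag consumes. A plausible reconstruction from the echo/cat steps at common.sh@19-20 in the trace, not the verbatim helper:

    create_job() {
        local job_section=$1 rw=$2 filename=$3
        # [global] additionally cats shared defaults (the common.sh@13 "cat" step).
        job="[$job_section]"
        {
            echo "$job"
            [[ -n $rw ]] && echo "rw=$rw"
            [[ -n $filename ]] && echo "filename=$filename"
        } >> "$testdir/test.conf"
    }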
00:22:26.167 00:22:26.167 Latency(us) 00:22:26.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43402.54 42.39 0.00 0.00 5892.33 1474.56 8817.57 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43373.34 42.36 0.00 0.00 5886.05 1362.85 7804.74 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43344.24 42.33 0.00 0.00 5880.08 1414.98 7804.74 00:22:26.167 =================================================================================================================== 00:22:26.167 Total : 130120.12 127.07 0.00 0.00 5886.15 1362.85 8817.57' 00:22:26.167 21:04:54 -- bdevperf/common.sh@32 -- # echo '[2024-06-09 21:04:50.175373] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:26.167 [2024-06-09 21:04:50.175580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126500 ] 00:22:26.167 Using job config with 3 jobs 00:22:26.167 [2024-06-09 21:04:50.343210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.167 [2024-06-09 21:04:50.554019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.167 cpumask for '\''job0'\'' is too big 00:22:26.167 cpumask for '\''job1'\'' is too big 00:22:26.167 cpumask for '\''job2'\'' is too big 00:22:26.167 Running I/O for 2 seconds... 00:22:26.167 00:22:26.167 Latency(us) 00:22:26.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43402.54 42.39 0.00 0.00 5892.33 1474.56 8817.57 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43373.34 42.36 0.00 0.00 5886.05 1362.85 7804.74 00:22:26.167 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:22:26.167 Malloc0 : 2.01 43344.24 42.33 0.00 0.00 5880.08 1414.98 7804.74 00:22:26.167 =================================================================================================================== 00:22:26.167 Total : 130120.12 127.07 0.00 0.00 5886.15 1362.85 8817.57' 00:22:26.167 21:04:54 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:22:26.167 21:04:54 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@35 -- # cleanup 00:22:26.167 21:04:54 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:22:26.167 21:04:54 -- bdevperf/common.sh@8 -- # local job_section=global 00:22:26.167 21:04:54 -- bdevperf/common.sh@9 -- # local rw=rw 00:22:26.167 21:04:54 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:22:26.167 21:04:54 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:22:26.167 21:04:54 -- bdevperf/common.sh@13 -- # cat 00:22:26.167 21:04:54 -- bdevperf/common.sh@18 -- # job='[global]' 00:22:26.167 00:22:26.167 21:04:54 -- bdevperf/common.sh@19 -- # echo 00:22:26.167 
21:04:54 -- bdevperf/common.sh@20 -- # cat 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@38 -- # create_job job0 00:22:26.167 21:04:54 -- bdevperf/common.sh@8 -- # local job_section=job0 00:22:26.167 21:04:54 -- bdevperf/common.sh@9 -- # local rw= 00:22:26.167 21:04:54 -- bdevperf/common.sh@10 -- # local filename= 00:22:26.167 21:04:54 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:22:26.167 21:04:54 -- bdevperf/common.sh@18 -- # job='[job0]' 00:22:26.167 21:04:54 -- bdevperf/common.sh@19 -- # echo 00:22:26.167 00:22:26.167 21:04:54 -- bdevperf/common.sh@20 -- # cat 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@39 -- # create_job job1 00:22:26.167 21:04:54 -- bdevperf/common.sh@8 -- # local job_section=job1 00:22:26.167 21:04:54 -- bdevperf/common.sh@9 -- # local rw= 00:22:26.167 21:04:54 -- bdevperf/common.sh@10 -- # local filename= 00:22:26.167 21:04:54 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:22:26.167 00:22:26.167 21:04:54 -- bdevperf/common.sh@18 -- # job='[job1]' 00:22:26.167 21:04:54 -- bdevperf/common.sh@19 -- # echo 00:22:26.167 21:04:54 -- bdevperf/common.sh@20 -- # cat 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@40 -- # create_job job2 00:22:26.167 21:04:54 -- bdevperf/common.sh@8 -- # local job_section=job2 00:22:26.167 21:04:54 -- bdevperf/common.sh@9 -- # local rw= 00:22:26.167 21:04:54 -- bdevperf/common.sh@10 -- # local filename= 00:22:26.167 21:04:54 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:22:26.167 21:04:54 -- bdevperf/common.sh@18 -- # job='[job2]' 00:22:26.167 00:22:26.167 21:04:54 -- bdevperf/common.sh@19 -- # echo 00:22:26.167 21:04:54 -- bdevperf/common.sh@20 -- # cat 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@41 -- # create_job job3 00:22:26.167 21:04:54 -- bdevperf/common.sh@8 -- # local job_section=job3 00:22:26.167 21:04:54 -- bdevperf/common.sh@9 -- # local rw= 00:22:26.167 21:04:54 -- bdevperf/common.sh@10 -- # local filename= 00:22:26.167 21:04:54 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:22:26.167 00:22:26.167 21:04:54 -- bdevperf/common.sh@18 -- # job='[job3]' 00:22:26.167 21:04:54 -- bdevperf/common.sh@19 -- # echo 00:22:26.167 21:04:54 -- bdevperf/common.sh@20 -- # cat 00:22:26.167 21:04:54 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:30.354 21:04:58 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-09 21:04:54.365729] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:30.354 [2024-06-09 21:04:54.365912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126560 ] 00:22:30.354 Using job config with 4 jobs 00:22:30.354 [2024-06-09 21:04:54.530974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.354 [2024-06-09 21:04:54.753098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.354 cpumask for '\''job0'\'' is too big 00:22:30.354 cpumask for '\''job1'\'' is too big 00:22:30.354 cpumask for '\''job2'\'' is too big 00:22:30.354 cpumask for '\''job3'\'' is too big 00:22:30.354 Running I/O for 2 seconds... 
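After the create_job global/job0…job3 sequence above, test.conf should be a small INI-style job file along these lines; the exact keys under [global] depend on what common.sh cats in as defaults (the "percentage: 70" in the tables below suggests a 70% read-mix knob among them):

    [global]
    filename=Malloc0:Malloc1
    rw=rw
    [job0]
    [job1]
    [job2]
    [job3]

Each empty [jobN] section inherits everything from [global], which is why bdevperf reports "Using job config with 4 jobs" for this run.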
00:22:30.354 00:22:30.354 Latency(us) 00:22:30.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.354 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc0 : 2.02 15930.16 15.56 0.00 0.00 16057.23 3038.49 25141.99 00:22:30.354 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc1 : 2.03 15916.72 15.54 0.00 0.00 16057.72 3604.48 25022.84 00:22:30.354 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc0 : 2.03 15902.49 15.53 0.00 0.00 16028.25 2949.12 22043.93 00:22:30.354 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc1 : 2.03 15890.95 15.52 0.00 0.00 16026.98 3470.43 22043.93 00:22:30.354 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc0 : 2.03 15879.92 15.51 0.00 0.00 15995.05 3008.70 19065.02 00:22:30.354 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.354 Malloc1 : 2.04 15943.24 15.57 0.00 0.00 15922.01 3440.64 19065.02 00:22:30.354 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.04 15931.80 15.56 0.00 0.00 15890.26 3023.59 18945.86 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.04 15920.21 15.55 0.00 0.00 15891.53 3485.32 18945.86 00:22:30.355 =================================================================================================================== 00:22:30.355 Total : 127315.49 124.33 0.00 0.00 15983.38 2949.12 25141.99' 00:22:30.355 21:04:58 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-09 21:04:54.365729] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:30.355 [2024-06-09 21:04:54.365912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126560 ] 00:22:30.355 Using job config with 4 jobs 00:22:30.355 [2024-06-09 21:04:54.530974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.355 [2024-06-09 21:04:54.753098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.355 cpumask for '\''job0'\'' is too big 00:22:30.355 cpumask for '\''job1'\'' is too big 00:22:30.355 cpumask for '\''job2'\'' is too big 00:22:30.355 cpumask for '\''job3'\'' is too big 00:22:30.355 Running I/O for 2 seconds... 
00:22:30.355 00:22:30.355 Latency(us) 00:22:30.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.02 15930.16 15.56 0.00 0.00 16057.23 3038.49 25141.99 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.03 15916.72 15.54 0.00 0.00 16057.72 3604.48 25022.84 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.03 15902.49 15.53 0.00 0.00 16028.25 2949.12 22043.93 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.03 15890.95 15.52 0.00 0.00 16026.98 3470.43 22043.93 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.03 15879.92 15.51 0.00 0.00 15995.05 3008.70 19065.02 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.04 15943.24 15.57 0.00 0.00 15922.01 3440.64 19065.02 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.04 15931.80 15.56 0.00 0.00 15890.26 3023.59 18945.86 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.04 15920.21 15.55 0.00 0.00 15891.53 3485.32 18945.86 00:22:30.355 =================================================================================================================== 00:22:30.355 Total : 127315.49 124.33 0.00 0.00 15983.38 2949.12 25141.99' 00:22:30.355 21:04:58 -- bdevperf/common.sh@32 -- # echo '[2024-06-09 21:04:54.365729] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:30.355 [2024-06-09 21:04:54.365912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126560 ] 00:22:30.355 Using job config with 4 jobs 00:22:30.355 [2024-06-09 21:04:54.530974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.355 [2024-06-09 21:04:54.753098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.355 cpumask for '\''job0'\'' is too big 00:22:30.355 cpumask for '\''job1'\'' is too big 00:22:30.355 cpumask for '\''job2'\'' is too big 00:22:30.355 cpumask for '\''job3'\'' is too big 00:22:30.355 Running I/O for 2 seconds... 
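The Total row in these tables is a straight per-job aggregate: summing the eight Malloc IOPS samples (15930.16 + 15916.72 + 15902.49 + 15890.95 + 15879.92 + 15943.24 + 15931.80 + 15920.21) reproduces the reported 127315.49. One way to sanity-check that from a captured run, assuming the column layout shown here:

    echo "$bdevperf_output" \
        | awk '$1 ~ /^Malloc/ && $2 == ":" { s += $4 } END { printf "%.2f\n", s }'
    # prints 127315.49 for this rw run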
00:22:30.355 00:22:30.355 Latency(us) 00:22:30.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.02 15930.16 15.56 0.00 0.00 16057.23 3038.49 25141.99 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.03 15916.72 15.54 0.00 0.00 16057.72 3604.48 25022.84 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.03 15902.49 15.53 0.00 0.00 16028.25 2949.12 22043.93 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.03 15890.95 15.52 0.00 0.00 16026.98 3470.43 22043.93 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.03 15879.92 15.51 0.00 0.00 15995.05 3008.70 19065.02 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.04 15943.24 15.57 0.00 0.00 15922.01 3440.64 19065.02 00:22:30.355 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc0 : 2.04 15931.80 15.56 0.00 0.00 15890.26 3023.59 18945.86 00:22:30.355 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:22:30.355 Malloc1 : 2.04 15920.21 15.55 0.00 0.00 15891.53 3485.32 18945.86 00:22:30.355 =================================================================================================================== 00:22:30.355 Total : 127315.49 124.33 0.00 0.00 15983.38 2949.12 25141.99' 00:22:30.355 21:04:58 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:22:30.355 21:04:58 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:22:30.355 21:04:58 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:22:30.355 21:04:58 -- bdevperf/test_config.sh@44 -- # cleanup 00:22:30.355 21:04:58 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:22:30.355 21:04:58 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:30.355 ************************************ 00:22:30.355 END TEST bdevperf_config 00:22:30.355 ************************************ 00:22:30.355 00:22:30.355 real 0m16.849s 00:22:30.355 user 0m14.967s 00:22:30.355 sys 0m1.328s 00:22:30.355 21:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.355 21:04:58 -- common/autotest_common.sh@10 -- # set +x 00:22:30.630 21:04:58 -- spdk/autotest.sh@198 -- # uname -s 00:22:30.630 21:04:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:22:30.630 21:04:58 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:22:30.630 21:04:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.630 21:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.630 21:04:58 -- common/autotest_common.sh@10 -- # set +x 00:22:30.630 ************************************ 00:22:30.630 START TEST reactor_set_interrupt 00:22:30.630 ************************************ 00:22:30.630 21:04:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:22:30.630 * Looking for test storage... 
00:22:30.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.630 21:04:58 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:30.630 21:04:58 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:22:30.630 21:04:58 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:22:30.630 21:04:58 -- common/autotest_common.sh@34 -- # set -e 00:22:30.630 21:04:58 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:22:30.630 21:04:58 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:22:30.630 21:04:58 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:22:30.631 21:04:58 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:22:30.631 21:04:58 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:22:30.631 21:04:58 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:22:30.631 21:04:58 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:22:30.631 21:04:58 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:22:30.631 21:04:58 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:22:30.631 21:04:58 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:22:30.631 21:04:58 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:22:30.631 21:04:58 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:22:30.631 21:04:58 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:22:30.631 21:04:58 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:22:30.631 21:04:58 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:22:30.631 21:04:58 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:22:30.631 21:04:58 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:22:30.631 21:04:58 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:22:30.631 21:04:58 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:22:30.631 21:04:58 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:22:30.631 21:04:58 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:30.631 21:04:58 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:22:30.631 21:04:58 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:22:30.631 21:04:58 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:22:30.631 21:04:58 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:22:30.631 21:04:58 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:22:30.631 21:04:58 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:22:30.631 21:04:58 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:22:30.631 21:04:58 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:22:30.631 21:04:58 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:22:30.631 21:04:58 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:22:30.631 21:04:58 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:22:30.631 21:04:58 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:22:30.631 21:04:58 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:22:30.631 21:04:58 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:22:30.631 21:04:58 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:22:30.631 21:04:58 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:22:30.631 21:04:58 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:22:30.631 21:04:58 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:22:30.631 21:04:58 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:22:30.631 21:04:58 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:22:30.631 21:04:58 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:22:30.631 21:04:58 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:22:30.631 21:04:58 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:22:30.631 21:04:58 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:22:30.631 21:04:58 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:22:30.631 21:04:58 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:22:30.631 21:04:58 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:22:30.631 21:04:58 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:22:30.631 21:04:58 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:22:30.631 21:04:58 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:22:30.631 21:04:58 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:22:30.631 21:04:58 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:22:30.631 21:04:58 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:22:30.631 21:04:58 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:22:30.631 21:04:58 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:22:30.631 21:04:58 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:22:30.631 21:04:58 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:22:30.631 21:04:58 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:22:30.631 21:04:58 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:22:30.631 21:04:58 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:22:30.631 21:04:58 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:22:30.631 21:04:58 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:22:30.631 21:04:58 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:22:30.631 21:04:58 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:22:30.631 21:04:58 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:22:30.631 21:04:58 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:22:30.631 21:04:58 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:22:30.631 21:04:58 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:22:30.631 21:04:58 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:22:30.631 21:04:58 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:22:30.631 21:04:58 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:22:30.631 21:04:58 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:22:30.631 21:04:58 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:22:30.631 21:04:58 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:22:30.631 21:04:58 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:22:30.631 21:04:58 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:22:30.631 21:04:58 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:30.631 21:04:58 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:30.631 21:04:58 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:22:30.631 21:04:58 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:22:30.631 21:04:58 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:22:30.631 21:04:58 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:22:30.631 21:04:58 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:22:30.631 21:04:58 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:22:30.631 21:04:58 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:22:30.631 21:04:58 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:22:30.631 21:04:58 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:22:30.631 21:04:58 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:22:30.631 21:04:58 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:22:30.631 21:04:58 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:22:30.631 21:04:58 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:22:30.631 21:04:58 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:22:30.631 #define SPDK_CONFIG_H 00:22:30.631 #define SPDK_CONFIG_APPS 1 00:22:30.631 #define SPDK_CONFIG_ARCH native 00:22:30.631 #define SPDK_CONFIG_ASAN 1 00:22:30.631 #undef SPDK_CONFIG_AVAHI 00:22:30.631 #undef SPDK_CONFIG_CET 00:22:30.631 #define SPDK_CONFIG_COVERAGE 1 00:22:30.631 #define SPDK_CONFIG_CROSS_PREFIX 00:22:30.631 #undef SPDK_CONFIG_CRYPTO 00:22:30.631 #undef SPDK_CONFIG_CRYPTO_MLX5 00:22:30.631 #undef SPDK_CONFIG_CUSTOMOCF 00:22:30.631 #undef SPDK_CONFIG_DAOS 00:22:30.631 #define SPDK_CONFIG_DAOS_DIR 00:22:30.631 #define SPDK_CONFIG_DEBUG 1 00:22:30.631 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:22:30.631 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:30.631 #define SPDK_CONFIG_DPDK_INC_DIR 00:22:30.631 #define SPDK_CONFIG_DPDK_LIB_DIR 00:22:30.631 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:22:30.631 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:30.631 #define SPDK_CONFIG_EXAMPLES 1 00:22:30.631 #undef SPDK_CONFIG_FC 00:22:30.631 #define SPDK_CONFIG_FC_PATH 00:22:30.631 #define SPDK_CONFIG_FIO_PLUGIN 1 00:22:30.631 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:22:30.631 #undef SPDK_CONFIG_FUSE 00:22:30.631 #undef SPDK_CONFIG_FUZZER 00:22:30.631 #define SPDK_CONFIG_FUZZER_LIB 00:22:30.631 #undef SPDK_CONFIG_GOLANG 00:22:30.631 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:22:30.631 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:22:30.631 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:22:30.631 #undef SPDK_CONFIG_HAVE_LIBBSD 00:22:30.631 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:22:30.631 #define 
SPDK_CONFIG_IDXD 1 00:22:30.631 #undef SPDK_CONFIG_IDXD_KERNEL 00:22:30.631 #undef SPDK_CONFIG_IPSEC_MB 00:22:30.631 #define SPDK_CONFIG_IPSEC_MB_DIR 00:22:30.631 #define SPDK_CONFIG_ISAL 1 00:22:30.631 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:22:30.631 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:22:30.631 #define SPDK_CONFIG_LIBDIR 00:22:30.631 #undef SPDK_CONFIG_LTO 00:22:30.631 #define SPDK_CONFIG_MAX_LCORES 00:22:30.631 #define SPDK_CONFIG_NVME_CUSE 1 00:22:30.631 #undef SPDK_CONFIG_OCF 00:22:30.631 #define SPDK_CONFIG_OCF_PATH 00:22:30.631 #define SPDK_CONFIG_OPENSSL_PATH 00:22:30.631 #undef SPDK_CONFIG_PGO_CAPTURE 00:22:30.631 #undef SPDK_CONFIG_PGO_USE 00:22:30.631 #define SPDK_CONFIG_PREFIX /usr/local 00:22:30.631 #undef SPDK_CONFIG_RAID5F 00:22:30.631 #undef SPDK_CONFIG_RBD 00:22:30.631 #define SPDK_CONFIG_RDMA 1 00:22:30.631 #define SPDK_CONFIG_RDMA_PROV verbs 00:22:30.631 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:22:30.631 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:22:30.631 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:22:30.631 #undef SPDK_CONFIG_SHARED 00:22:30.631 #undef SPDK_CONFIG_SMA 00:22:30.631 #define SPDK_CONFIG_TESTS 1 00:22:30.631 #undef SPDK_CONFIG_TSAN 00:22:30.631 #undef SPDK_CONFIG_UBLK 00:22:30.631 #define SPDK_CONFIG_UBSAN 1 00:22:30.631 #define SPDK_CONFIG_UNIT_TESTS 1 00:22:30.631 #undef SPDK_CONFIG_URING 00:22:30.631 #define SPDK_CONFIG_URING_PATH 00:22:30.631 #undef SPDK_CONFIG_URING_ZNS 00:22:30.631 #undef SPDK_CONFIG_USDT 00:22:30.631 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:22:30.631 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:22:30.631 #undef SPDK_CONFIG_VFIO_USER 00:22:30.631 #define SPDK_CONFIG_VFIO_USER_DIR 00:22:30.631 #define SPDK_CONFIG_VHOST 1 00:22:30.631 #define SPDK_CONFIG_VIRTIO 1 00:22:30.632 #undef SPDK_CONFIG_VTUNE 00:22:30.632 #define SPDK_CONFIG_VTUNE_DIR 00:22:30.632 #define SPDK_CONFIG_WERROR 1 00:22:30.632 #define SPDK_CONFIG_WPDK_DIR 00:22:30.632 #undef SPDK_CONFIG_XNVME 00:22:30.632 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:22:30.632 21:04:58 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:22:30.632 21:04:58 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.632 21:04:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.632 21:04:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.632 21:04:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.632 21:04:58 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:30.632 21:04:58 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:30.632 21:04:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:30.632 21:04:58 -- paths/export.sh@5 -- # export PATH 00:22:30.632 21:04:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:30.632 21:04:58 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:30.632 21:04:58 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:30.632 21:04:58 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:30.632 21:04:58 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:30.632 21:04:58 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:22:30.632 21:04:58 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:22:30.632 21:04:58 -- pm/common@16 -- # TEST_TAG=N/A 00:22:30.632 21:04:58 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:22:30.632 21:04:58 -- common/autotest_common.sh@52 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:22:30.632 21:04:58 -- common/autotest_common.sh@56 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:22:30.632 21:04:58 -- common/autotest_common.sh@58 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:22:30.632 21:04:58 -- common/autotest_common.sh@60 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:22:30.632 21:04:58 -- common/autotest_common.sh@62 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:22:30.632 21:04:58 -- common/autotest_common.sh@64 -- # : 00:22:30.632 21:04:58 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:22:30.632 21:04:58 -- common/autotest_common.sh@66 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:22:30.632 21:04:58 -- common/autotest_common.sh@68 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:22:30.632 21:04:58 -- common/autotest_common.sh@70 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:22:30.632 21:04:58 -- common/autotest_common.sh@72 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:22:30.632 21:04:58 -- common/autotest_common.sh@74 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:22:30.632 21:04:58 -- common/autotest_common.sh@76 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:22:30.632 21:04:58 -- common/autotest_common.sh@78 -- # : 0 00:22:30.632 21:04:58 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:22:30.632 21:04:58 -- common/autotest_common.sh@80 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:22:30.632 21:04:58 -- common/autotest_common.sh@82 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:22:30.632 21:04:58 -- common/autotest_common.sh@84 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:22:30.632 21:04:58 -- common/autotest_common.sh@86 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:22:30.632 21:04:58 -- common/autotest_common.sh@88 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:22:30.632 21:04:58 -- common/autotest_common.sh@90 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:22:30.632 21:04:58 -- common/autotest_common.sh@92 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:22:30.632 21:04:58 -- common/autotest_common.sh@94 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:22:30.632 21:04:58 -- common/autotest_common.sh@96 -- # : rdma 00:22:30.632 21:04:58 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:22:30.632 21:04:58 -- common/autotest_common.sh@98 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:22:30.632 21:04:58 -- common/autotest_common.sh@100 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:22:30.632 21:04:58 -- common/autotest_common.sh@102 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:22:30.632 21:04:58 -- common/autotest_common.sh@104 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:22:30.632 21:04:58 -- common/autotest_common.sh@106 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:22:30.632 21:04:58 -- common/autotest_common.sh@108 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:22:30.632 21:04:58 -- common/autotest_common.sh@110 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:22:30.632 21:04:58 -- common/autotest_common.sh@112 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:22:30.632 21:04:58 -- common/autotest_common.sh@114 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:22:30.632 21:04:58 -- common/autotest_common.sh@116 -- # : 1 00:22:30.632 21:04:58 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:22:30.632 21:04:58 -- common/autotest_common.sh@118 -- # : 00:22:30.632 21:04:58 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:22:30.632 21:04:58 -- common/autotest_common.sh@120 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:22:30.632 21:04:58 -- common/autotest_common.sh@122 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:22:30.632 21:04:58 -- common/autotest_common.sh@124 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:22:30.632 21:04:58 -- common/autotest_common.sh@126 -- # : 0 00:22:30.632 
21:04:58 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:22:30.632 21:04:58 -- common/autotest_common.sh@128 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:22:30.632 21:04:58 -- common/autotest_common.sh@130 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:22:30.632 21:04:58 -- common/autotest_common.sh@132 -- # : 00:22:30.632 21:04:58 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:22:30.632 21:04:58 -- common/autotest_common.sh@134 -- # : true 00:22:30.632 21:04:58 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:22:30.632 21:04:58 -- common/autotest_common.sh@136 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:22:30.632 21:04:58 -- common/autotest_common.sh@138 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:22:30.632 21:04:58 -- common/autotest_common.sh@140 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:22:30.632 21:04:58 -- common/autotest_common.sh@142 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:22:30.632 21:04:58 -- common/autotest_common.sh@144 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:22:30.632 21:04:58 -- common/autotest_common.sh@146 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:22:30.632 21:04:58 -- common/autotest_common.sh@148 -- # : 00:22:30.632 21:04:58 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:22:30.632 21:04:58 -- common/autotest_common.sh@150 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:22:30.632 21:04:58 -- common/autotest_common.sh@152 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:22:30.632 21:04:58 -- common/autotest_common.sh@154 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:22:30.632 21:04:58 -- common/autotest_common.sh@156 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:22:30.632 21:04:58 -- common/autotest_common.sh@158 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:22:30.632 21:04:58 -- common/autotest_common.sh@160 -- # : 0 00:22:30.632 21:04:58 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:22:30.632 21:04:58 -- common/autotest_common.sh@163 -- # : 00:22:30.632 21:04:58 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:22:30.633 21:04:58 -- common/autotest_common.sh@165 -- # : 0 00:22:30.633 21:04:58 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:22:30.633 21:04:58 -- common/autotest_common.sh@167 -- # : 0 00:22:30.633 21:04:58 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:22:30.633 21:04:58 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:30.633 21:04:58 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:30.633 21:04:58 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:30.633 21:04:58 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:30.633 21:04:58 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:22:30.633 21:04:58 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:22:30.633 21:04:58 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:30.633 21:04:58 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:30.633 21:04:58 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:30.633 21:04:58 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:30.633 21:04:58 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:22:30.633 21:04:58 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:22:30.633 21:04:58 -- common/autotest_common.sh@196 -- # cat 00:22:30.633 21:04:58 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:22:30.633 21:04:58 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:30.633 21:04:58 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:30.633 21:04:58 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:30.633 
21:04:58 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:30.633 21:04:58 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:22:30.633 21:04:58 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:22:30.633 21:04:58 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:30.633 21:04:58 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:30.633 21:04:58 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:30.633 21:04:58 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:30.633 21:04:58 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:22:30.633 21:04:58 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:22:30.633 21:04:58 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:22:30.633 21:04:58 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:22:30.633 21:04:58 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:30.633 21:04:58 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:30.633 21:04:58 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:22:30.633 21:04:58 -- common/autotest_common.sh@249 -- # export valgrind= 00:22:30.633 21:04:58 -- common/autotest_common.sh@249 -- # valgrind= 00:22:30.633 21:04:58 -- common/autotest_common.sh@255 -- # uname -s 00:22:30.633 21:04:58 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:22:30.633 21:04:58 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:22:30.633 21:04:58 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:22:30.633 21:04:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@265 -- # MAKE=make 00:22:30.633 21:04:58 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:22:30.633 21:04:58 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:22:30.633 21:04:58 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:22:30.633 21:04:58 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:22:30.633 21:04:58 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:22:30.633 21:04:58 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:22:30.633 21:04:58 -- common/autotest_common.sh@309 -- # [[ -z 126654 ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@309 -- # kill -0 126654 00:22:30.633 21:04:58 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:22:30.633 21:04:58 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:22:30.633 21:04:58 -- common/autotest_common.sh@322 -- # local mount target_dir 00:22:30.633 21:04:58 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:22:30.633 21:04:58 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:22:30.633 21:04:58 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:22:30.633 21:04:58 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:22:30.633 21:04:58 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.FOOQkx 00:22:30.633 21:04:58 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:22:30.633 21:04:58 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:22:30.633 21:04:58 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.FOOQkx/tests/interrupt /tmp/spdk.FOOQkx 00:22:30.633 21:04:58 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@318 -- # df -T 00:22:30.633 21:04:58 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:22:30.633 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=10274385920 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:22:30.633 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=10325630976 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265802752 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268395520 00:22:30.633 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:22:30.633 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:22:30.633 21:04:58 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:22:30.633 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:22:30.633 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:22:30.633 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:22:30.634 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.634 21:04:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:22:30.634 21:04:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:22:30.634 21:04:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=97191063552 00:22:30.634 21:04:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:22:30.634 21:04:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=2511716352 00:22:30.634 21:04:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:30.634 21:04:58 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:22:30.634 * Looking for test storage... 00:22:30.634 21:04:58 -- common/autotest_common.sh@359 -- # local target_space new_size 00:22:30.634 21:04:58 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:22:30.634 21:04:58 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.634 21:04:58 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:22:30.634 21:04:58 -- common/autotest_common.sh@363 -- # mount=/ 00:22:30.634 21:04:58 -- common/autotest_common.sh@365 -- # target_space=10274385920 00:22:30.634 21:04:58 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:22:30.634 21:04:58 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:22:30.634 21:04:58 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:22:30.634 21:04:58 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:22:30.634 21:04:58 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:22:30.634 21:04:58 -- common/autotest_common.sh@372 -- # new_size=12540223488 00:22:30.634 21:04:58 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:22:30.634 21:04:58 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.634 21:04:58 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.634 21:04:58 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:30.634 21:04:58 -- common/autotest_common.sh@380 -- # return 0 00:22:30.634 21:04:58 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:22:30.634 21:04:58 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:22:30.634 21:04:58 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:30.634 21:04:58 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:22:30.634 
21:04:58 -- common/autotest_common.sh@1672 -- # true 00:22:30.634 21:04:58 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:22:30.634 21:04:58 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:22:30.634 21:04:58 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:22:30.634 21:04:58 -- common/autotest_common.sh@27 -- # exec 00:22:30.634 21:04:58 -- common/autotest_common.sh@29 -- # exec 00:22:30.634 21:04:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:22:30.634 21:04:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:22:30.634 21:04:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:22:30.634 21:04:58 -- common/autotest_common.sh@18 -- # set -x 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:22:30.634 21:04:58 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:22:30.634 21:04:58 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:22:30.634 21:04:58 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=126700 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:22:30.634 21:04:58 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 126700 /var/tmp/spdk.sock 00:22:30.634 21:04:58 -- common/autotest_common.sh@819 -- # '[' -z 126700 ']' 00:22:30.634 21:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.634 21:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.634 21:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.634 21:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.634 21:04:58 -- common/autotest_common.sh@10 -- # set +x 00:22:30.905 [2024-06-09 21:04:58.805367] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
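At this point the harness has launched the interrupt_tgt example (-m 0x07 pins three reactors to cores 0-2, -E enables interrupt mode) and is waiting on its RPC socket; the target's remaining startup notices continue immediately below. The real waitforlisten retries an actual RPC against the socket; the hedged sketch here only polls for the socket node to appear, which is a weaker but often sufficient check:

wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S: the path exists and is a socket (does not prove the target answers RPCs yet)
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
# build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
# intr_tgt_pid=$!; wait_for_rpc_sock /var/tmp/spdk.sock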
00:22:30.905 [2024-06-09 21:04:58.806090] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126700 ] 00:22:30.905 [2024-06-09 21:04:58.973321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:31.164 [2024-06-09 21:04:59.156552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.164 [2024-06-09 21:04:59.156659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.164 [2024-06-09 21:04:59.156679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.423 [2024-06-09 21:04:59.440490] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:31.682 21:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:31.682 21:04:59 -- common/autotest_common.sh@852 -- # return 0 00:22:31.682 21:04:59 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:22:31.682 21:04:59 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.940 Malloc0 00:22:31.940 Malloc1 00:22:31.940 Malloc2 00:22:31.940 21:05:00 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:22:31.940 21:05:00 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:22:31.940 21:05:00 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:22:31.940 21:05:00 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:22:31.940 5000+0 records in 00:22:31.941 5000+0 records out 00:22:31.941 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0135529 s, 756 MB/s 00:22:31.941 21:05:00 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:22:32.199 AIO0 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 126700 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 126700 without_thd 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=126700 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:22:32.199 21:05:00 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:22:32.199 21:05:00 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:22:32.458 21:05:00 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:22:32.458 21:05:00 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:22:32.458 21:05:00 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:22:32.716 21:05:00 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:22:32.716 spdk_thread ids are 1 on reactor0. 00:22:32.716 21:05:00 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:22:32.716 21:05:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:32.716 21:05:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126700 0 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126700 0 idle 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:32.716 21:05:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126700 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.70 reactor_0' 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@48 -- # echo 126700 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.70 reactor_0 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:32.974 21:05:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:32.974 21:05:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126700 1 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126700 1 idle 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:32.974 
21:05:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:22:32.974 21:05:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:32.974 21:05:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126703 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.00 reactor_1' 00:22:32.974 21:05:01 -- interrupt/interrupt_common.sh@48 -- # echo 126703 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.00 reactor_1 00:22:32.974 21:05:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:32.974 21:05:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:33.233 21:05:01 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:33.233 21:05:01 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126700 2 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126700 2 idle 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126705 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.00 reactor_2' 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@48 -- # echo 126705 root 20 0 20.1t 145852 28852 S 0.0 1.2 0:00.00 reactor_2 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:33.233 21:05:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:33.233 21:05:01 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:22:33.233 21:05:01 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
00:22:33.233 21:05:01 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:22:33.490 [2024-06-09 21:05:01.589110] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:33.490 21:05:01 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:22:33.749 [2024-06-09 21:05:01.824900] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:22:33.749 [2024-06-09 21:05:01.825459] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:33.749 21:05:01 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:22:34.007 [2024-06-09 21:05:02.080771] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:22:34.007 [2024-06-09 21:05:02.081256] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:34.007 21:05:02 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:22:34.007 21:05:02 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 126700 0 00:22:34.007 21:05:02 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 126700 0 busy 00:22:34.007 21:05:02 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:34.007 21:05:02 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:34.008 21:05:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126700 root 20 0 20.1t 145952 28852 R 93.3 1.2 0:01.14 reactor_0' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # echo 126700 root 20 0 20.1t 145952 28852 R 93.3 1.2 0:01.14 reactor_0 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:34.266 21:05:02 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:22:34.266 21:05:02 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 126700 2 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 126700 2 busy 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:34.266 
21:05:02 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126705 root 20 0 20.1t 145952 28852 R 99.9 1.2 0:00.34 reactor_2' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # echo 126705 root 20 0 20.1t 145952 28852 R 99.9 1.2 0:00.34 reactor_2 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:22:34.266 21:05:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:34.266 21:05:02 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:22:34.524 [2024-06-09 21:05:02.648810] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:22:34.524 [2024-06-09 21:05:02.649242] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:34.524 21:05:02 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:22:34.524 21:05:02 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 126700 2 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126700 2 idle 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:34.524 21:05:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:34.782 21:05:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126705 root 20 0 20.1t 146016 28852 S 0.0 1.2 0:00.56 reactor_2' 00:22:34.782 21:05:02 -- interrupt/interrupt_common.sh@48 -- # echo 126705 root 20 0 20.1t 146016 28852 S 0.0 1.2 0:00.56 reactor_2 00:22:34.782 21:05:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:34.783 21:05:02 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:34.783 21:05:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:34.783 21:05:02 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:22:35.041 [2024-06-09 21:05:03.012798] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:22:35.041 [2024-06-09 21:05:03.013214] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:35.041 21:05:03 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:22:35.041 21:05:03 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:22:35.041 21:05:03 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:22:35.299 [2024-06-09 21:05:03.265100] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:35.299 21:05:03 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 126700 0 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126700 0 idle 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@33 -- # local pid=126700 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126700 -w 256 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126700 root 20 0 20.1t 146108 28852 S 6.7 1.2 0:01.91 reactor_0' 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@48 -- # echo 126700 root 20 0 20.1t 146108 28852 S 6.7 1.2 0:01.91 reactor_0 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:22:35.299 21:05:03 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:35.299 21:05:03 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:22:35.299 21:05:03 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:22:35.299 21:05:03 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:22:35.299 21:05:03 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 126700 
00:22:35.299 21:05:03 -- common/autotest_common.sh@926 -- # '[' -z 126700 ']' 00:22:35.299 21:05:03 -- common/autotest_common.sh@930 -- # kill -0 126700 00:22:35.300 21:05:03 -- common/autotest_common.sh@931 -- # uname 00:22:35.300 21:05:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:35.300 21:05:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126700 00:22:35.300 21:05:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:35.300 killing process with pid 126700 00:22:35.300 21:05:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:35.300 21:05:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126700' 00:22:35.300 21:05:03 -- common/autotest_common.sh@945 -- # kill 126700 00:22:35.300 21:05:03 -- common/autotest_common.sh@950 -- # wait 126700 00:22:36.676 21:05:04 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:22:36.676 21:05:04 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=126853 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.676 21:05:04 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 126853 /var/tmp/spdk.sock 00:22:36.676 21:05:04 -- common/autotest_common.sh@819 -- # '[' -z 126853 ']' 00:22:36.676 21:05:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.676 21:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.676 21:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.676 21:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:36.676 21:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:36.676 [2024-06-09 21:05:04.808975] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:36.676 [2024-06-09 21:05:04.809149] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126853 ] 00:22:36.934 [2024-06-09 21:05:04.970153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:37.192 [2024-06-09 21:05:05.134489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.192 [2024-06-09 21:05:05.134635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.192 [2024-06-09 21:05:05.134627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.450 [2024-06-09 21:05:05.378458] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
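The killprocess call traced above follows a careful teardown pattern: confirm the pid is still alive, check the process name so a recycled pid is not killed by mistake, then signal and reap. A condensed sketch; the real autotest_common.sh helper also refuses to kill processes running as sudo and handles pids that are not its own children, both elided here:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 for an SPDK app
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # wait only reaps our own children
}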
00:22:37.708 21:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:37.708 21:05:05 -- common/autotest_common.sh@852 -- # return 0 00:22:37.708 21:05:05 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:22:37.708 21:05:05 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:37.967 Malloc0 00:22:37.967 Malloc1 00:22:37.967 Malloc2 00:22:37.967 21:05:06 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:22:37.967 21:05:06 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:22:37.967 21:05:06 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:22:37.967 21:05:06 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:22:37.967 5000+0 records in 00:22:37.967 5000+0 records out 00:22:37.967 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0187549 s, 546 MB/s 00:22:37.967 21:05:06 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:22:38.226 AIO0 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 126853 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 126853 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=126853 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:22:38.226 21:05:06 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:22:38.226 21:05:06 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:38.484 21:05:06 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:22:38.485 spdk_thread ids are 1 on reactor0. 
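Two probes recur throughout this test: resolving which SPDK thread ids run on a reactor's cpumask via thread_get_stats, and sampling a reactor thread's %CPU from a single batch-mode top snapshot. Condensed sketches of both; rpc.py is assumed to be on PATH (the trace invokes it by full path), the jq filter is the one the trace itself uses, and the hex-to-decimal mask conversion only matches what thread_get_stats reports for the single-digit masks seen here:

reactor_thread_ids() {
    local cpumask=$(( $1 ))    # 0x1 -> 1, 0x4 -> 4
    rpc.py thread_get_stats |
        jq --arg reactor_cpumask "$cpumask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}

reactor_cpu_rate() {
    local pid=$1 idx=$2
    # one snapshot (-n 1), batch mode (-b), per-thread rows (-H), wide output (-w 256);
    # column 9 of the matching row is %CPU
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
        sed -e 's/^\s*//g' | awk '{print $9}'
}
# Per the comparisons in the trace, "busy" requires a rate of at least 70
# ([[ 99 -lt 70 ]] must fail) and "idle" a rate of at most 30 ([[ 0 -gt 30 ]] must fail).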
00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:38.485 21:05:06 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126853 0 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126853 0 idle 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:38.485 21:05:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:22:38.743 21:05:06 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126853 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.61 reactor_0' 00:22:38.743 21:05:06 -- interrupt/interrupt_common.sh@48 -- # echo 126853 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.61 reactor_0 00:22:38.743 21:05:06 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:38.743 21:05:06 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:38.743 21:05:06 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:38.744 21:05:06 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:38.744 21:05:06 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126853 1 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126853 1 idle 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:38.744 21:05:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:22:39.002 21:05:06 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126856 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.00 reactor_1' 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@48 -- # echo 126856 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.00 reactor_1 00:22:39.003 21:05:06 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:39.003 21:05:06 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:22:39.003 21:05:06 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 126853 2 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126853 2 idle 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:39.003 21:05:06 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126857 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.00 reactor_2' 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@48 -- # echo 126857 root 20 0 20.1t 145376 28440 S 0.0 1.2 0:00.00 reactor_2 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:39.003 21:05:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:39.003 21:05:07 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:22:39.003 21:05:07 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:22:39.262 [2024-06-09 21:05:07.338811] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:22:39.262 [2024-06-09 21:05:07.339096] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:22:39.262 [2024-06-09 21:05:07.339347] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:39.262 21:05:07 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:22:39.521 [2024-06-09 21:05:07.650605] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:22:39.521 [2024-06-09 21:05:07.651045] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:39.521 21:05:07 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:22:39.521 21:05:07 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 126853 0 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 126853 0 busy 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:39.521 21:05:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126853 root 20 0 20.1t 145452 28440 R 99.9 1.2 0:01.11 reactor_0' 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@48 -- # echo 126853 root 20 0 20.1t 145452 28440 R 99.9 1.2 0:01.11 reactor_0 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:39.780 21:05:07 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:22:39.780 21:05:07 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 126853 2 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 126853 2 busy 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:39.780 21:05:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
126857 root 20 0 20.1t 145452 28440 R 99.9 1.2 0:00.35 reactor_2' 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@48 -- # echo 126857 root 20 0 20.1t 145452 28440 R 99.9 1.2 0:00.35 reactor_2 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:22:40.058 21:05:08 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:40.058 21:05:08 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:22:40.316 [2024-06-09 21:05:08.242809] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:22:40.316 [2024-06-09 21:05:08.243124] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:22:40.316 21:05:08 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:22:40.316 21:05:08 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 126853 2 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126853 2 idle 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@33 -- # local pid=126853 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126857 root 20 0 20.1t 145512 28440 S 0.0 1.2 0:00.59 reactor_2' 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@48 -- # echo 126857 root 20 0 20.1t 145512 28440 S 0.0 1.2 0:00.59 reactor_2 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:22:40.316 21:05:08 -- interrupt/interrupt_common.sh@56 -- # return 0 00:22:40.316 21:05:08 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:22:40.575 [2024-06-09 21:05:08.602824] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0.
00:22:40.575 [2024-06-09 21:05:08.603215] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:22:40.575 [2024-06-09 21:05:08.603254] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:22:40.575 21:05:08 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:22:40.575 21:05:08 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 126853 0
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 126853 0 idle
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@33 -- # local pid=126853
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@34 -- # local idx=0
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@35 -- # local state=idle
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]]
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]]
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@41 -- # hash top
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 ))
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 ))
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0
00:22:40.575 21:05:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 126853 -w 256
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 126853 root 20 0 20.1t 145556 28440 S 0.0 1.2 0:01.88 reactor_0'
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@48 -- # echo 126853 root 20 0 20.1t 145556 28440 S 0.0 1.2 0:01.88 reactor_0
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g'
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}'
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]]
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]]
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]]
00:22:40.834 21:05:08 -- interrupt/interrupt_common.sh@56 -- # return 0
00:22:40.834 21:05:08 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:22:40.834 21:05:08 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:22:40.834 21:05:08 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:22:40.834 21:05:08 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 126853
00:22:40.834 21:05:08 -- common/autotest_common.sh@926 -- # '[' -z 126853 ']'
00:22:40.834 21:05:08 -- common/autotest_common.sh@930 -- # kill -0 126853
00:22:40.834 21:05:08 -- common/autotest_common.sh@931 -- # uname
00:22:40.834 21:05:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:40.834 21:05:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126853
00:22:40.834 killing process with pid 126853
00:22:40.834 21:05:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:40.834 21:05:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:40.834 21:05:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126853'
00:22:40.834 21:05:08 -- common/autotest_common.sh@945 -- # kill 126853
00:22:40.834 21:05:08 -- common/autotest_common.sh@950 -- # wait 126853
00:22:41.770 21:05:09 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:22:41.770 21:05:09 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:22:41.770
00:22:41.770 real 0m11.377s
00:22:41.770 user 0m11.657s
00:22:41.770 sys 0m1.573s
00:22:41.770 21:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:41.770 21:05:09 -- common/autotest_common.sh@10 -- # set +x
00:22:41.770 ************************************
00:22:41.770 END TEST reactor_set_interrupt
00:22:41.770 ************************************
00:22:42.030 21:05:09 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:22:42.030 21:05:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:22:42.030 21:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:22:42.030 21:05:09 -- common/autotest_common.sh@10 -- # set +x
00:22:42.030 ************************************
00:22:42.030 START TEST reap_unregistered_poller
00:22:42.030 ************************************
00:22:42.030 21:05:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:22:42.030 * Looking for test storage...
00:22:42.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:22:42.030 21:05:10 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
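run_test, which brackets every test above, reduces to roughly the banner-and-time wrapper sketched below; the real autotest_common.sh version also validates its argument count (the '[' 2 -le 1 ']' check in the trace) and toggles xtrace around the banners, both omitted from this sketch:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test reap_unregistered_poller ./test/interrupt/reap_unregistered_poller.sh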
00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:42.030 21:05:10 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:22:42.030 21:05:10 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:22:42.030 21:05:10 -- common/autotest_common.sh@34 -- # set -e 00:22:42.030 21:05:10 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:22:42.030 21:05:10 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:22:42.030 21:05:10 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:22:42.030 21:05:10 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:22:42.030 21:05:10 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:22:42.030 21:05:10 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:22:42.030 21:05:10 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:22:42.030 21:05:10 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:22:42.030 21:05:10 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:22:42.030 21:05:10 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:22:42.030 21:05:10 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:22:42.030 21:05:10 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:22:42.030 21:05:10 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:22:42.030 21:05:10 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:22:42.030 21:05:10 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:22:42.030 21:05:10 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:22:42.030 21:05:10 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:22:42.030 21:05:10 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:22:42.030 21:05:10 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:22:42.030 21:05:10 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:22:42.030 21:05:10 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:42.030 21:05:10 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:22:42.030 21:05:10 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:22:42.030 21:05:10 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:22:42.030 21:05:10 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:22:42.030 21:05:10 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:22:42.030 21:05:10 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:22:42.030 21:05:10 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:22:42.030 21:05:10 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:22:42.030 21:05:10 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:22:42.030 21:05:10 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:22:42.030 21:05:10 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:22:42.030 21:05:10 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:22:42.030 21:05:10 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:22:42.030 21:05:10 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:22:42.030 21:05:10 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:22:42.030 21:05:10 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:22:42.030 
21:05:10 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:22:42.030 21:05:10 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:22:42.030 21:05:10 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:22:42.030 21:05:10 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:22:42.030 21:05:10 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:22:42.030 21:05:10 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:22:42.030 21:05:10 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:22:42.030 21:05:10 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:22:42.030 21:05:10 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:22:42.030 21:05:10 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:22:42.030 21:05:10 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:22:42.030 21:05:10 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:22:42.030 21:05:10 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:22:42.030 21:05:10 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:22:42.030 21:05:10 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:22:42.030 21:05:10 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:22:42.030 21:05:10 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:22:42.030 21:05:10 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:22:42.030 21:05:10 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:22:42.030 21:05:10 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:22:42.030 21:05:10 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:22:42.030 21:05:10 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:22:42.030 21:05:10 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:22:42.030 21:05:10 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:22:42.030 21:05:10 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:22:42.030 21:05:10 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:22:42.030 21:05:10 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:22:42.030 21:05:10 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:22:42.030 21:05:10 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:22:42.030 21:05:10 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:22:42.030 21:05:10 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:22:42.030 21:05:10 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:22:42.030 21:05:10 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:22:42.030 21:05:10 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:22:42.030 21:05:10 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:22:42.030 21:05:10 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:22:42.030 21:05:10 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:22:42.030 21:05:10 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:22:42.031 21:05:10 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:22:42.031 21:05:10 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:22:42.031 21:05:10 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:42.031 21:05:10 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:22:42.031 21:05:10 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:22:42.031 
21:05:10 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:22:42.031 21:05:10 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:22:42.031 21:05:10 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:22:42.031 21:05:10 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:22:42.031 21:05:10 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:22:42.031 21:05:10 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:22:42.031 21:05:10 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:22:42.031 21:05:10 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:22:42.031 21:05:10 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:22:42.031 21:05:10 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:22:42.031 21:05:10 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:22:42.031 21:05:10 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:22:42.031 21:05:10 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:22:42.031 #define SPDK_CONFIG_H 00:22:42.031 #define SPDK_CONFIG_APPS 1 00:22:42.031 #define SPDK_CONFIG_ARCH native 00:22:42.031 #define SPDK_CONFIG_ASAN 1 00:22:42.031 #undef SPDK_CONFIG_AVAHI 00:22:42.031 #undef SPDK_CONFIG_CET 00:22:42.031 #define SPDK_CONFIG_COVERAGE 1 00:22:42.031 #define SPDK_CONFIG_CROSS_PREFIX 00:22:42.031 #undef SPDK_CONFIG_CRYPTO 00:22:42.031 #undef SPDK_CONFIG_CRYPTO_MLX5 00:22:42.031 #undef SPDK_CONFIG_CUSTOMOCF 00:22:42.031 #undef SPDK_CONFIG_DAOS 00:22:42.031 #define SPDK_CONFIG_DAOS_DIR 00:22:42.031 #define SPDK_CONFIG_DEBUG 1 00:22:42.031 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:22:42.031 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:22:42.031 #define SPDK_CONFIG_DPDK_INC_DIR 00:22:42.031 #define SPDK_CONFIG_DPDK_LIB_DIR 00:22:42.031 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:22:42.031 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:22:42.031 #define SPDK_CONFIG_EXAMPLES 1 00:22:42.031 #undef SPDK_CONFIG_FC 00:22:42.031 #define SPDK_CONFIG_FC_PATH 00:22:42.031 #define SPDK_CONFIG_FIO_PLUGIN 1 00:22:42.031 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:22:42.031 #undef SPDK_CONFIG_FUSE 00:22:42.031 #undef SPDK_CONFIG_FUZZER 00:22:42.031 #define SPDK_CONFIG_FUZZER_LIB 00:22:42.031 #undef SPDK_CONFIG_GOLANG 00:22:42.031 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:22:42.031 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:22:42.031 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:22:42.031 #undef SPDK_CONFIG_HAVE_LIBBSD 00:22:42.031 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:22:42.031 #define SPDK_CONFIG_IDXD 1 00:22:42.031 #undef SPDK_CONFIG_IDXD_KERNEL 00:22:42.031 #undef SPDK_CONFIG_IPSEC_MB 00:22:42.031 #define SPDK_CONFIG_IPSEC_MB_DIR 00:22:42.031 #define SPDK_CONFIG_ISAL 1 00:22:42.031 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:22:42.031 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:22:42.031 #define SPDK_CONFIG_LIBDIR 00:22:42.031 #undef SPDK_CONFIG_LTO 00:22:42.031 #define SPDK_CONFIG_MAX_LCORES 00:22:42.031 #define SPDK_CONFIG_NVME_CUSE 1 00:22:42.031 #undef SPDK_CONFIG_OCF 00:22:42.031 #define SPDK_CONFIG_OCF_PATH 00:22:42.031 #define SPDK_CONFIG_OPENSSL_PATH 00:22:42.031 #undef SPDK_CONFIG_PGO_CAPTURE 00:22:42.031 #undef SPDK_CONFIG_PGO_USE 00:22:42.031 #define SPDK_CONFIG_PREFIX /usr/local 
00:22:42.031 #undef SPDK_CONFIG_RAID5F 00:22:42.031 #undef SPDK_CONFIG_RBD 00:22:42.031 #define SPDK_CONFIG_RDMA 1 00:22:42.031 #define SPDK_CONFIG_RDMA_PROV verbs 00:22:42.031 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:22:42.031 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:22:42.031 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:22:42.031 #undef SPDK_CONFIG_SHARED 00:22:42.031 #undef SPDK_CONFIG_SMA 00:22:42.031 #define SPDK_CONFIG_TESTS 1 00:22:42.031 #undef SPDK_CONFIG_TSAN 00:22:42.031 #undef SPDK_CONFIG_UBLK 00:22:42.031 #define SPDK_CONFIG_UBSAN 1 00:22:42.031 #define SPDK_CONFIG_UNIT_TESTS 1 00:22:42.031 #undef SPDK_CONFIG_URING 00:22:42.031 #define SPDK_CONFIG_URING_PATH 00:22:42.031 #undef SPDK_CONFIG_URING_ZNS 00:22:42.031 #undef SPDK_CONFIG_USDT 00:22:42.031 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:22:42.031 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:22:42.031 #undef SPDK_CONFIG_VFIO_USER 00:22:42.031 #define SPDK_CONFIG_VFIO_USER_DIR 00:22:42.031 #define SPDK_CONFIG_VHOST 1 00:22:42.031 #define SPDK_CONFIG_VIRTIO 1 00:22:42.031 #undef SPDK_CONFIG_VTUNE 00:22:42.031 #define SPDK_CONFIG_VTUNE_DIR 00:22:42.031 #define SPDK_CONFIG_WERROR 1 00:22:42.031 #define SPDK_CONFIG_WPDK_DIR 00:22:42.031 #undef SPDK_CONFIG_XNVME 00:22:42.031 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:22:42.031 21:05:10 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:22:42.031 21:05:10 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.031 21:05:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.031 21:05:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.031 21:05:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.031 21:05:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:42.031 21:05:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:42.031 21:05:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:42.031 21:05:10 -- paths/export.sh@5 -- # export PATH 00:22:42.031 21:05:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:42.031 21:05:10 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:42.031 21:05:10 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:22:42.031 21:05:10 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:42.031 21:05:10 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:22:42.031 21:05:10 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:22:42.031 21:05:10 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:22:42.031 21:05:10 -- pm/common@16 -- # TEST_TAG=N/A 00:22:42.031 21:05:10 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:22:42.031 21:05:10 -- common/autotest_common.sh@52 -- # : 1 00:22:42.031 21:05:10 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:22:42.031 21:05:10 -- common/autotest_common.sh@56 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:22:42.031 21:05:10 -- common/autotest_common.sh@58 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:22:42.031 21:05:10 -- common/autotest_common.sh@60 -- # : 1 00:22:42.031 21:05:10 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:22:42.031 21:05:10 -- common/autotest_common.sh@62 -- # : 1 00:22:42.031 21:05:10 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:22:42.031 21:05:10 -- common/autotest_common.sh@64 -- # : 00:22:42.031 21:05:10 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:22:42.031 21:05:10 -- common/autotest_common.sh@66 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:22:42.031 21:05:10 -- common/autotest_common.sh@68 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:22:42.031 21:05:10 -- common/autotest_common.sh@70 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:22:42.031 21:05:10 -- common/autotest_common.sh@72 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:22:42.031 21:05:10 -- common/autotest_common.sh@74 -- # : 1 00:22:42.031 21:05:10 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:22:42.031 21:05:10 -- common/autotest_common.sh@76 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:22:42.031 21:05:10 -- common/autotest_common.sh@78 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:22:42.031 21:05:10 -- common/autotest_common.sh@80 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:22:42.031 21:05:10 -- common/autotest_common.sh@82 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:22:42.031 21:05:10 -- common/autotest_common.sh@84 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:22:42.031 21:05:10 -- 
common/autotest_common.sh@86 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:22:42.031 21:05:10 -- common/autotest_common.sh@88 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:22:42.031 21:05:10 -- common/autotest_common.sh@90 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:22:42.031 21:05:10 -- common/autotest_common.sh@92 -- # : 0 00:22:42.031 21:05:10 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:22:42.032 21:05:10 -- common/autotest_common.sh@94 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:22:42.032 21:05:10 -- common/autotest_common.sh@96 -- # : rdma 00:22:42.032 21:05:10 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:22:42.032 21:05:10 -- common/autotest_common.sh@98 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:22:42.032 21:05:10 -- common/autotest_common.sh@100 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:22:42.032 21:05:10 -- common/autotest_common.sh@102 -- # : 1 00:22:42.032 21:05:10 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:22:42.032 21:05:10 -- common/autotest_common.sh@104 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:22:42.032 21:05:10 -- common/autotest_common.sh@106 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:22:42.032 21:05:10 -- common/autotest_common.sh@108 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:22:42.032 21:05:10 -- common/autotest_common.sh@110 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:22:42.032 21:05:10 -- common/autotest_common.sh@112 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:22:42.032 21:05:10 -- common/autotest_common.sh@114 -- # : 1 00:22:42.032 21:05:10 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:22:42.032 21:05:10 -- common/autotest_common.sh@116 -- # : 1 00:22:42.032 21:05:10 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:22:42.032 21:05:10 -- common/autotest_common.sh@118 -- # : 00:22:42.032 21:05:10 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:22:42.032 21:05:10 -- common/autotest_common.sh@120 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:22:42.032 21:05:10 -- common/autotest_common.sh@122 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:22:42.032 21:05:10 -- common/autotest_common.sh@124 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:22:42.032 21:05:10 -- common/autotest_common.sh@126 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:22:42.032 21:05:10 -- common/autotest_common.sh@128 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:22:42.032 21:05:10 -- common/autotest_common.sh@130 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:22:42.032 21:05:10 -- common/autotest_common.sh@132 -- # : 00:22:42.032 21:05:10 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:22:42.032 
21:05:10 -- common/autotest_common.sh@134 -- # : true 00:22:42.032 21:05:10 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:22:42.032 21:05:10 -- common/autotest_common.sh@136 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:22:42.032 21:05:10 -- common/autotest_common.sh@138 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:22:42.032 21:05:10 -- common/autotest_common.sh@140 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:22:42.032 21:05:10 -- common/autotest_common.sh@142 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:22:42.032 21:05:10 -- common/autotest_common.sh@144 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:22:42.032 21:05:10 -- common/autotest_common.sh@146 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:22:42.032 21:05:10 -- common/autotest_common.sh@148 -- # : 00:22:42.032 21:05:10 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:22:42.032 21:05:10 -- common/autotest_common.sh@150 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:22:42.032 21:05:10 -- common/autotest_common.sh@152 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:22:42.032 21:05:10 -- common/autotest_common.sh@154 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:22:42.032 21:05:10 -- common/autotest_common.sh@156 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:22:42.032 21:05:10 -- common/autotest_common.sh@158 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:22:42.032 21:05:10 -- common/autotest_common.sh@160 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:22:42.032 21:05:10 -- common/autotest_common.sh@163 -- # : 00:22:42.032 21:05:10 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:22:42.032 21:05:10 -- common/autotest_common.sh@165 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:22:42.032 21:05:10 -- common/autotest_common.sh@167 -- # : 0 00:22:42.032 21:05:10 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:22:42.032 21:05:10 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:22:42.032 21:05:10 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:42.032 21:05:10 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:42.032 21:05:10 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:22:42.032 21:05:10 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:22:42.032 21:05:10 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:42.032 21:05:10 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:22:42.032 21:05:10 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:42.032 21:05:10 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:22:42.032 21:05:10 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:22:42.032 21:05:10 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:22:42.032 21:05:10 -- common/autotest_common.sh@196 -- # cat 00:22:42.032 21:05:10 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:22:42.032 21:05:10 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:42.032 21:05:10 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:22:42.032 21:05:10 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:42.032 21:05:10 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:22:42.032 21:05:10 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:22:42.032 21:05:10 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:22:42.032 21:05:10 -- common/autotest_common.sh@235 -- # export 
SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:42.032 21:05:10 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:22:42.032 21:05:10 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:42.032 21:05:10 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:22:42.032 21:05:10 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:22:42.032 21:05:10 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:22:42.032 21:05:10 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:22:42.032 21:05:10 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:22:42.032 21:05:10 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:42.032 21:05:10 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:22:42.032 21:05:10 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:22:42.032 21:05:10 -- common/autotest_common.sh@249 -- # export valgrind= 00:22:42.032 21:05:10 -- common/autotest_common.sh@249 -- # valgrind= 00:22:42.032 21:05:10 -- common/autotest_common.sh@255 -- # uname -s 00:22:42.032 21:05:10 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:22:42.032 21:05:10 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:22:42.032 21:05:10 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:22:42.032 21:05:10 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:22:42.032 21:05:10 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:22:42.032 21:05:10 -- common/autotest_common.sh@265 -- # MAKE=make 00:22:42.032 21:05:10 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:22:42.032 21:05:10 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:22:42.032 21:05:10 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:22:42.033 21:05:10 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:22:42.033 21:05:10 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:22:42.033 21:05:10 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:22:42.033 21:05:10 -- common/autotest_common.sh@309 -- # [[ -z 127021 ]] 00:22:42.033 21:05:10 -- common/autotest_common.sh@309 -- # kill -0 127021 00:22:42.033 21:05:10 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:22:42.033 21:05:10 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:22:42.033 21:05:10 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:22:42.033 21:05:10 -- common/autotest_common.sh@322 -- # local mount target_dir 00:22:42.033 21:05:10 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:22:42.033 21:05:10 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:22:42.033 21:05:10 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:22:42.033 21:05:10 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:22:42.033 21:05:10 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.2Axch0 00:22:42.033 21:05:10 -- 
common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:22:42.033 21:05:10 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:22:42.033 21:05:10 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:22:42.033 21:05:10 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.2Axch0/tests/interrupt /tmp/spdk.2Axch0 00:22:42.033 21:05:10 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:22:42.033 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.033 21:05:10 -- common/autotest_common.sh@318 -- # df -T 00:22:42.033 21:05:10 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:42.033 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:22:42.033 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:22:42.033 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:22:42.033 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:22:42.033 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=10274344960 00:22:42.033 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:22:42.033 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=10325671936 00:22:42.033 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:42.033 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:42.033 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265802752 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268395520 00:22:42.292 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:22:42.292 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:22:42.292 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:22:42.292 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:22:42.292 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:22:42.292 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:22:42.292 
21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:22:42.292 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:22:42.292 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:22:42.292 21:05:10 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # avails["$mount"]=97190895616 00:22:42.292 21:05:10 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:22:42.292 21:05:10 -- common/autotest_common.sh@354 -- # uses["$mount"]=2511884288 00:22:42.292 21:05:10 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:22:42.292 21:05:10 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:22:42.292 * Looking for test storage... 00:22:42.292 21:05:10 -- common/autotest_common.sh@359 -- # local target_space new_size 00:22:42.292 21:05:10 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:22:42.292 21:05:10 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:42.292 21:05:10 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:22:42.292 21:05:10 -- common/autotest_common.sh@363 -- # mount=/ 00:22:42.292 21:05:10 -- common/autotest_common.sh@365 -- # target_space=10274344960 00:22:42.292 21:05:10 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:22:42.292 21:05:10 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:22:42.292 21:05:10 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:22:42.292 21:05:10 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:22:42.292 21:05:10 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:22:42.292 21:05:10 -- common/autotest_common.sh@372 -- # new_size=12540264448 00:22:42.292 21:05:10 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:22:42.292 21:05:10 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:22:42.292 21:05:10 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:22:42.292 21:05:10 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:42.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:22:42.292 21:05:10 -- common/autotest_common.sh@380 -- # return 0 00:22:42.292 21:05:10 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:22:42.292 21:05:10 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:22:42.292 21:05:10 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:22:42.292 21:05:10 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:22:42.292 21:05:10 -- common/autotest_common.sh@1672 -- # true 00:22:42.292 21:05:10 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:22:42.292 21:05:10 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:22:42.292 21:05:10 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:22:42.292 21:05:10 -- common/autotest_common.sh@27 -- # 
exec 00:22:42.292 21:05:10 -- common/autotest_common.sh@29 -- # exec 00:22:42.292 21:05:10 -- common/autotest_common.sh@31 -- # xtrace_restore 00:22:42.292 21:05:10 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:22:42.292 21:05:10 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:22:42.292 21:05:10 -- common/autotest_common.sh@18 -- # set -x 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:22:42.292 21:05:10 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:22:42.292 21:05:10 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:22:42.292 21:05:10 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=127067 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:22:42.292 21:05:10 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 127067 /var/tmp/spdk.sock 00:22:42.292 21:05:10 -- common/autotest_common.sh@819 -- # '[' -z 127067 ']' 00:22:42.292 21:05:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.292 21:05:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.292 21:05:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.292 21:05:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:42.292 21:05:10 -- common/autotest_common.sh@10 -- # set +x 00:22:42.292 [2024-06-09 21:05:10.278564] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
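start_intr_tgt above launches build/examples/interrupt_tgt on a three-core mask (-m 0x07) with a private RPC socket (-r /var/tmp/spdk.sock), then waitforlisten blocks until that socket answers. A minimal sketch of the launch-and-wait pattern, assuming rpc.py's spdk_get_version method as the liveness probe (the real helper's internals are not shown in this trace):

  # Launch the target in the background and remember its pid.
  /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt \
      -m 0x07 -r /var/tmp/spdk.sock -E -g &
  intr_tgt_pid=$!

  # Poll the RPC socket until the app answers, or give up after 100 tries.
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
              spdk_get_version &> /dev/null; then
          break
      fi
      sleep 0.1
  done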
00:22:42.292 [2024-06-09 21:05:10.278842] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127067 ] 00:22:42.292 [2024-06-09 21:05:10.460113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:42.551 [2024-06-09 21:05:10.627967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.551 [2024-06-09 21:05:10.628108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.551 [2024-06-09 21:05:10.628111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.810 [2024-06-09 21:05:10.883707] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:43.069 21:05:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.069 21:05:11 -- common/autotest_common.sh@852 -- # return 0 00:22:43.069 21:05:11 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:22:43.069 21:05:11 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:22:43.069 21:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.069 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:22:43.069 21:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.069 21:05:11 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:22:43.069 "name": "app_thread", 00:22:43.069 "id": 1, 00:22:43.069 "active_pollers": [], 00:22:43.069 "timed_pollers": [ 00:22:43.069 { 00:22:43.069 "name": "rpc_subsystem_poll", 00:22:43.069 "id": 1, 00:22:43.069 "state": "waiting", 00:22:43.069 "run_count": 0, 00:22:43.069 "busy_count": 0, 00:22:43.069 "period_ticks": 8800000 00:22:43.069 } 00:22:43.069 ], 00:22:43.069 "paused_pollers": [] 00:22:43.069 }' 00:22:43.069 21:05:11 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:22:43.328 21:05:11 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:22:43.328 21:05:11 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:22:43.328 21:05:11 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:22:43.328 21:05:11 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:22:43.328 21:05:11 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:22:43.328 21:05:11 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:22:43.328 21:05:11 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:22:43.328 21:05:11 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:22:43.328 5000+0 records in 00:22:43.328 5000+0 records out 00:22:43.328 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0242468 s, 422 MB/s 00:22:43.328 21:05:11 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:22:43.586 AIO0 00:22:43.586 21:05:11 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:43.845 21:05:11 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:22:43.845 21:05:11 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:22:43.845 21:05:11 -- common/autotest_common.sh@551 -- # xtrace_disable 
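The thread_get_pollers snapshots above are compared by pulling poller names out of the reply with jq: one pass drains .active_pollers[].name (empty here) and a second collects .timed_pollers[].name, leaving rpc_subsystem_poll. A sketch of the same extraction against the JSON shown in the trace, using rpc.py directly rather than the rpc_cmd wrapper:

  # app_thread holds the first element of the thread_get_pollers reply,
  # i.e. the JSON object printed in the trace above.
  app_thread=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      thread_get_pollers | jq -r '.threads[0]')

  native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")   # empty here
  native_pollers+=' '
  native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")   # rpc_subsystem_poll
  echo "$native_pollers"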
00:22:43.845 21:05:11 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:22:43.845 21:05:11 -- common/autotest_common.sh@10 -- # set +x 00:22:43.845 21:05:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.845 21:05:11 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:22:43.845 "name": "app_thread", 00:22:43.845 "id": 1, 00:22:43.845 "active_pollers": [], 00:22:43.845 "timed_pollers": [ 00:22:43.845 { 00:22:43.845 "name": "rpc_subsystem_poll", 00:22:43.845 "id": 1, 00:22:43.845 "state": "waiting", 00:22:43.845 "run_count": 0, 00:22:43.845 "busy_count": 0, 00:22:43.845 "period_ticks": 8800000 00:22:43.845 } 00:22:43.845 ], 00:22:43.845 "paused_pollers": [] 00:22:43.845 }' 00:22:43.845 21:05:11 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:44.102 21:05:12 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 127067 00:22:44.102 21:05:12 -- common/autotest_common.sh@926 -- # '[' -z 127067 ']' 00:22:44.102 21:05:12 -- common/autotest_common.sh@930 -- # kill -0 127067 00:22:44.102 21:05:12 -- common/autotest_common.sh@931 -- # uname 00:22:44.102 21:05:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.102 21:05:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127067 00:22:44.102 21:05:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:44.102 21:05:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:44.102 killing process with pid 127067 00:22:44.102 21:05:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127067' 00:22:44.102 21:05:12 -- common/autotest_common.sh@945 -- # kill 127067 00:22:44.102 21:05:12 -- common/autotest_common.sh@950 -- # wait 127067 00:22:45.036 21:05:13 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:22:45.036 21:05:13 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:22:45.036 00:22:45.036 real 0m3.151s 00:22:45.036 user 0m2.524s 00:22:45.036 sys 0m0.572s 00:22:45.036 21:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.036 21:05:13 -- common/autotest_common.sh@10 -- # set +x 00:22:45.036 ************************************ 00:22:45.036 END TEST reap_unregistered_poller 00:22:45.036 ************************************ 00:22:45.036 21:05:13 -- spdk/autotest.sh@204 -- # uname -s 00:22:45.036 21:05:13 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:22:45.036 21:05:13 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:22:45.036 21:05:13 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:22:45.036 21:05:13 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:22:45.036 21:05:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:45.036 21:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:45.036 21:05:13 -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.036 ************************************ 00:22:45.036 START TEST spdk_dd 00:22:45.036 ************************************ 00:22:45.036 21:05:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:22:45.295 * Looking for test storage... 00:22:45.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:45.295 21:05:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.295 21:05:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.295 21:05:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.295 21:05:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.295 21:05:13 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:45.295 21:05:13 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:45.295 21:05:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:45.295 21:05:13 -- paths/export.sh@5 -- # export PATH 00:22:45.295 21:05:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:45.295 21:05:13 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:45.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:22:45.554 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:46.490 21:05:14 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:22:46.490 21:05:14 -- dd/dd.sh@11 -- # nvme_in_userspace 00:22:46.490 21:05:14 -- scripts/common.sh@311 -- # local bdf bdfs 00:22:46.490 21:05:14 -- scripts/common.sh@312 -- # local nvmes 00:22:46.490 21:05:14 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:22:46.490 21:05:14 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:46.490 21:05:14 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:22:46.490 21:05:14 -- scripts/common.sh@297 -- # local bdf= 00:22:46.490 21:05:14 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:22:46.490 21:05:14 -- scripts/common.sh@232 -- # local class 00:22:46.490 
21:05:14 -- scripts/common.sh@233 -- # local subclass 00:22:46.490 21:05:14 -- scripts/common.sh@234 -- # local progif 00:22:46.490 21:05:14 -- scripts/common.sh@235 -- # printf %02x 1 00:22:46.490 21:05:14 -- scripts/common.sh@235 -- # class=01 00:22:46.490 21:05:14 -- scripts/common.sh@236 -- # printf %02x 8 00:22:46.490 21:05:14 -- scripts/common.sh@236 -- # subclass=08 00:22:46.490 21:05:14 -- scripts/common.sh@237 -- # printf %02x 2 00:22:46.490 21:05:14 -- scripts/common.sh@237 -- # progif=02 00:22:46.490 21:05:14 -- scripts/common.sh@239 -- # hash lspci 00:22:46.490 21:05:14 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:22:46.490 21:05:14 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:22:46.490 21:05:14 -- scripts/common.sh@242 -- # grep -i -- -p02 00:22:46.490 21:05:14 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:46.490 21:05:14 -- scripts/common.sh@244 -- # tr -d '"' 00:22:46.490 21:05:14 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:46.490 21:05:14 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:22:46.490 21:05:14 -- scripts/common.sh@15 -- # local i 00:22:46.490 21:05:14 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:22:46.490 21:05:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:22:46.490 21:05:14 -- scripts/common.sh@24 -- # return 0 00:22:46.490 21:05:14 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:22:46.490 21:05:14 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:22:46.490 21:05:14 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:22:46.490 21:05:14 -- scripts/common.sh@322 -- # uname -s 00:22:46.490 21:05:14 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:22:46.490 21:05:14 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:22:46.490 21:05:14 -- scripts/common.sh@327 -- # (( 1 )) 00:22:46.490 21:05:14 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:22:46.490 21:05:14 -- dd/dd.sh@13 -- # check_liburing 00:22:46.490 21:05:14 -- dd/common.sh@139 -- # local lib so 00:22:46.490 21:05:14 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:22:46.490 21:05:14 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:22:46.490 21:05:14 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.490 21:05:14 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:22:46.490 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:22:46.491 21:05:14 -- dd/common.sh@142 -- # read -r lib _ so _ 00:22:46.491 21:05:14 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:22:46.491 21:05:14 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:22:46.491 21:05:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:46.491 21:05:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.491 21:05:14 -- common/autotest_common.sh@10 -- # set +x 00:22:46.491 ************************************ 00:22:46.491 START TEST spdk_dd_basic_rw 00:22:46.491 ************************************ 00:22:46.491 21:05:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:22:46.491 * Looking for test storage... 
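check_liburing above never actually runs spdk_dd: with LD_TRACE_LOADED_OBJECTS=1 set, the dynamic loader prints the shared objects it would map and exits, and the loop greps that list for liburing. A compact sketch of the same probe (the glob pattern is the one matched repeatedly in the trace; breaking on first match is a simplification):

  liburing_in_use=0
  # Loader output is "libfoo.so.N => /path/libfoo.so.N (0xaddr)",
  # so read maps $lib to the soname and $so to the resolved path.
  while read -r lib _ so _; do
      if [[ $lib == liburing.so.* ]]; then
          liburing_in_use=1
          break
      fi
  done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  echo "liburing_in_use=$liburing_in_use"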
00:22:46.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:22:46.750 21:05:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.750 21:05:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.750 21:05:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.750 21:05:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.750 21:05:14 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:46.750 21:05:14 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:46.750 21:05:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:46.750 21:05:14 -- paths/export.sh@5 -- # export PATH 00:22:46.750 21:05:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:22:46.750 21:05:14 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:22:46.750 21:05:14 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:22:46.750 21:05:14 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:22:46.750 21:05:14 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:22:46.750 21:05:14 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:22:46.750 21:05:14 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:22:46.750 21:05:14 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:22:46.750 21:05:14 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:22:46.750 21:05:14 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:46.750 21:05:14 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:22:46.750 21:05:14 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:22:46.750 21:05:14 -- dd/common.sh@126 -- # mapfile -t id 00:22:46.750 21:05:14 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:22:47.011 21:05:14 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 97 Data Units Written: 7 Host Read Commands: 2104 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:22:47.011 21:05:14 -- dd/common.sh@130 -- # lbaf=04 00:22:47.012 21:05:14 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 97 Data Units Written: 7 Host Read Commands: 2104 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:22:47.012 21:05:14 -- dd/common.sh@132 -- # lbaf=4096 00:22:47.012 21:05:14 -- dd/common.sh@134 -- # echo 4096 00:22:47.012 21:05:14 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:22:47.012 21:05:14 -- dd/basic_rw.sh@96 -- # : 00:22:47.012 21:05:14 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:47.012 21:05:14 -- dd/basic_rw.sh@96 -- # gen_conf 00:22:47.012 21:05:14 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:22:47.012 21:05:14 -- dd/common.sh@31 -- # xtrace_disable 00:22:47.012 21:05:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.012 21:05:14 -- common/autotest_common.sh@10 -- # set +x 00:22:47.012 21:05:14 -- common/autotest_common.sh@10 -- # set +x 00:22:47.012 ************************************ 
00:22:47.012 START TEST dd_bs_lt_native_bs 00:22:47.012 ************************************ 00:22:47.012 21:05:14 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:47.012 21:05:14 -- common/autotest_common.sh@640 -- # local es=0 00:22:47.012 21:05:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:47.012 21:05:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:47.012 21:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:47.012 21:05:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:47.012 21:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:47.012 21:05:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:47.012 21:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:47.012 21:05:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:47.012 21:05:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:22:47.012 21:05:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:22:47.012 { 00:22:47.012 "subsystems": [ 00:22:47.012 { 00:22:47.012 "subsystem": "bdev", 00:22:47.012 "config": [ 00:22:47.012 { 00:22:47.012 "params": { 00:22:47.012 "trtype": "pcie", 00:22:47.012 "traddr": "0000:00:06.0", 00:22:47.012 "name": "Nvme0" 00:22:47.012 }, 00:22:47.012 "method": "bdev_nvme_attach_controller" 00:22:47.012 }, 00:22:47.012 { 00:22:47.012 "method": "bdev_wait_for_examine" 00:22:47.012 } 00:22:47.012 ] 00:22:47.012 } 00:22:47.012 ] 00:22:47.012 } 00:22:47.012 [2024-06-09 21:05:15.043822] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
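Note: the get_native_nvme_bs step traced above captures the whole spdk_nvme_identify dump with mapfile and then runs two bash regex matches over it — one to find the index of the currently active LBA format, one to pull that format's data size (4096 here, since LBA Format #04 is active). A minimal standalone sketch of the same two-step match; treat it as an illustration, not the real dd/common.sh helper:

    # Sketch: derive the native block size from an identify dump, as the
    # get_native_nvme_bs trace above does (two regex matches via BASH_REMATCH).
    get_native_nvme_bs() {
        local pci=$1 lbaf id
        mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")
        # 1) "Current LBA Format: LBA Format #04" -> lbaf=04
        [[ ${id[*]} =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        # 2) "LBA Format #04: Data Size: 4096" -> 4096
        [[ ${id[*]} =~ LBA\ Format\ #$lbaf:\ Data\ Size:\ *([0-9]+) ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }

The second match keys off the index captured by the first, which is why the trace shows lbaf=04 immediately before lbaf=4096 reuses the same variable for the size.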
00:22:47.012 [2024-06-09 21:05:15.044022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127372 ] 00:22:47.271 [2024-06-09 21:05:15.210742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.271 [2024-06-09 21:05:15.384726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.529 [2024-06-09 21:05:15.701792] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:22:47.529 [2024-06-09 21:05:15.701908] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:48.465 [2024-06-09 21:05:16.289561] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:22:48.724 21:05:16 -- common/autotest_common.sh@643 -- # es=234 00:22:48.724 21:05:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:48.724 21:05:16 -- common/autotest_common.sh@652 -- # es=106 00:22:48.724 21:05:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:22:48.724 21:05:16 -- common/autotest_common.sh@660 -- # es=1 00:22:48.724 21:05:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:48.724 00:22:48.724 real 0m1.683s 00:22:48.724 user 0m1.428s 00:22:48.724 sys 0m0.215s 00:22:48.724 21:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.724 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:48.724 ************************************ 00:22:48.724 END TEST dd_bs_lt_native_bs 00:22:48.724 ************************************ 00:22:48.724 21:05:16 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:22:48.724 21:05:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:48.724 21:05:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.724 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:48.724 ************************************ 00:22:48.724 START TEST dd_rw 00:22:48.724 ************************************ 00:22:48.724 21:05:16 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:22:48.724 21:05:16 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:22:48.724 21:05:16 -- dd/basic_rw.sh@12 -- # local count size 00:22:48.724 21:05:16 -- dd/basic_rw.sh@13 -- # local qds bss 00:22:48.724 21:05:16 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:22:48.724 21:05:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:48.724 21:05:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:48.724 21:05:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:48.724 21:05:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:48.724 21:05:16 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:22:48.724 21:05:16 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:22:48.724 21:05:16 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:22:48.724 21:05:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:48.724 21:05:16 -- dd/basic_rw.sh@23 -- # count=15 00:22:48.724 21:05:16 -- dd/basic_rw.sh@24 -- # count=15 00:22:48.724 21:05:16 -- dd/basic_rw.sh@25 -- # size=61440 00:22:48.724 21:05:16 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:22:48.724 21:05:16 -- dd/common.sh@98 -- # xtrace_disable 00:22:48.724 21:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:49.292 21:05:17 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
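Note: the dd_bs_lt_native_bs case that just ended is a negative test — spdk_dd is expected to reject --bs=2048 against the 4096-byte native block, so the invocation is wrapped in autotest_common.sh's NOT helper and the raw exit status (234 above) is remapped before the final check. A reduced sketch of that inversion pattern, with the status remapping simplified to what the trace shows:

    # Sketch: a NOT wrapper that succeeds only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?             # capture the command's exit status (234 in the log)
        (( es > 128 )) && es=106  # collapse out-of-range statuses, per the trace
        (( es != 0 )) && es=1     # any failure becomes a generic 1
        (( !es == 0 ))            # invert: exit 0 only if the command failed
    }

So the ERROR lines from spdk_dd ("--bs value cannot be less than input (1) neither output (4096) native block size") are the expected outcome, and the test passed precisely because they appeared.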
00:22:49.292 21:05:17 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:49.292 21:05:17 -- dd/common.sh@31 -- # xtrace_disable 00:22:49.292 21:05:17 -- common/autotest_common.sh@10 -- # set +x 00:22:49.292 { 00:22:49.292 "subsystems": [ 00:22:49.292 { 00:22:49.292 "subsystem": "bdev", 00:22:49.292 "config": [ 00:22:49.292 { 00:22:49.292 "params": { 00:22:49.292 "trtype": "pcie", 00:22:49.292 "traddr": "0000:00:06.0", 00:22:49.292 "name": "Nvme0" 00:22:49.292 }, 00:22:49.292 "method": "bdev_nvme_attach_controller" 00:22:49.292 }, 00:22:49.292 { 00:22:49.292 "method": "bdev_wait_for_examine" 00:22:49.292 } 00:22:49.292 ] 00:22:49.292 } 00:22:49.292 ] 00:22:49.292 } 00:22:49.292 [2024-06-09 21:05:17.323113] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:49.292 [2024-06-09 21:05:17.323317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127425 ] 00:22:49.550 [2024-06-09 21:05:17.488827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.550 [2024-06-09 21:05:17.667096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.685  Copying: 60/60 [kB] (average 19 MBps) 00:22:50.685 00:22:50.944 21:05:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:22:50.944 21:05:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:50.944 21:05:18 -- dd/common.sh@31 -- # xtrace_disable 00:22:50.944 21:05:18 -- common/autotest_common.sh@10 -- # set +x 00:22:50.944 [2024-06-09 21:05:18.920366] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
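Note: every spdk_dd run in this file receives its bdev configuration as JSON on an anonymous file descriptor — that is what --json /dev/fd/62 in the commands above refers to. gen_conf serializes the method_bdev_nvme_attach_controller_0 associative array declared at the top of basic_rw.sh into the bdev_nvme_attach_controller + bdev_wait_for_examine document printed between the braces. A rough stand-in showing the mechanism (the JSON is copied from the log; gen_conf's real implementation lives in dd/common.sh, and process substitution is my assumption for how the fd is produced):

    # Stand-in gen_conf emitting the config seen in the log; process
    # substitution <(...) is what turns its output into a /dev/fd/NN path.
    gen_conf() {
        printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
          {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},
           "method":"bdev_nvme_attach_controller"},
          {"method":"bdev_wait_for_examine"}]}]}'
    }

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)

bdev_wait_for_examine holds I/O until the freshly attached Nvme0n1 bdev has finished being examined, so the copy never races the attach.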
00:22:50.944 [2024-06-09 21:05:18.920537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127451 ] 00:22:50.944 { 00:22:50.944 "subsystems": [ 00:22:50.944 { 00:22:50.944 "subsystem": "bdev", 00:22:50.944 "config": [ 00:22:50.944 { 00:22:50.944 "params": { 00:22:50.944 "trtype": "pcie", 00:22:50.944 "traddr": "0000:00:06.0", 00:22:50.944 "name": "Nvme0" 00:22:50.944 }, 00:22:50.944 "method": "bdev_nvme_attach_controller" 00:22:50.944 }, 00:22:50.944 { 00:22:50.944 "method": "bdev_wait_for_examine" 00:22:50.944 } 00:22:50.945 ] 00:22:50.945 } 00:22:50.945 ] 00:22:50.945 } 00:22:50.945 [2024-06-09 21:05:19.073757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.204 [2024-06-09 21:05:19.231794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.444  Copying: 60/60 [kB] (average 19 MBps) 00:22:52.444 00:22:52.444 21:05:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:52.444 21:05:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:22:52.444 21:05:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:52.444 21:05:20 -- dd/common.sh@11 -- # local nvme_ref= 00:22:52.444 21:05:20 -- dd/common.sh@12 -- # local size=61440 00:22:52.444 21:05:20 -- dd/common.sh@14 -- # local bs=1048576 00:22:52.444 21:05:20 -- dd/common.sh@15 -- # local count=1 00:22:52.444 21:05:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:52.444 21:05:20 -- dd/common.sh@18 -- # gen_conf 00:22:52.444 21:05:20 -- dd/common.sh@31 -- # xtrace_disable 00:22:52.444 21:05:20 -- common/autotest_common.sh@10 -- # set +x 00:22:52.444 { 00:22:52.444 "subsystems": [ 00:22:52.444 { 00:22:52.444 "subsystem": "bdev", 00:22:52.444 "config": [ 00:22:52.444 { 00:22:52.444 "params": { 00:22:52.444 "trtype": "pcie", 00:22:52.444 "traddr": "0000:00:06.0", 00:22:52.444 "name": "Nvme0" 00:22:52.444 }, 00:22:52.444 "method": "bdev_nvme_attach_controller" 00:22:52.444 }, 00:22:52.444 { 00:22:52.444 "method": "bdev_wait_for_examine" 00:22:52.444 } 00:22:52.444 ] 00:22:52.444 } 00:22:52.444 ] 00:22:52.444 } 00:22:52.444 [2024-06-09 21:05:20.585590] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
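Note: each (bs, qd) combination in dd_rw follows the same three-step cycle traced above: write the generated dump file to the bdev, read the same number of blocks back into a second file, and byte-compare the two. Mind the flag split: --if/--of name regular files, --ib/--ob name SPDK bdevs. Condensed to its skeleton (paths shortened):

    # One dd_rw iteration: file -> bdev -> file -> compare.
    bs=4096 qd=1 count=15                  # 15 * 4096 = 61440 bytes per pass
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(gen_conf)
    diff -q dd.dump0 dd.dump1              # silent when the round trip was lossless

--count is only needed on the read-back: the write is bounded by the input file's size, but Nvme0n1 is a 5 GiB namespace, so the read must be told where to stop.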
00:22:52.445 [2024-06-09 21:05:20.585822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127480 ] 00:22:52.703 [2024-06-09 21:05:20.752012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.961 [2024-06-09 21:05:20.908989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.153  Copying: 1024/1024 [kB] (average 500 MBps) 00:22:54.153 00:22:54.153 21:05:22 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:54.153 21:05:22 -- dd/basic_rw.sh@23 -- # count=15 00:22:54.153 21:05:22 -- dd/basic_rw.sh@24 -- # count=15 00:22:54.153 21:05:22 -- dd/basic_rw.sh@25 -- # size=61440 00:22:54.153 21:05:22 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:22:54.153 21:05:22 -- dd/common.sh@98 -- # xtrace_disable 00:22:54.153 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.719 21:05:22 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:22:54.719 21:05:22 -- dd/basic_rw.sh@30 -- # gen_conf 00:22:54.719 21:05:22 -- dd/common.sh@31 -- # xtrace_disable 00:22:54.719 21:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.719 { 00:22:54.719 "subsystems": [ 00:22:54.719 { 00:22:54.719 "subsystem": "bdev", 00:22:54.719 "config": [ 00:22:54.719 { 00:22:54.719 "params": { 00:22:54.719 "trtype": "pcie", 00:22:54.719 "traddr": "0000:00:06.0", 00:22:54.719 "name": "Nvme0" 00:22:54.719 }, 00:22:54.719 "method": "bdev_nvme_attach_controller" 00:22:54.719 }, 00:22:54.719 { 00:22:54.719 "method": "bdev_wait_for_examine" 00:22:54.719 } 00:22:54.719 ] 00:22:54.719 } 00:22:54.719 ] 00:22:54.719 } 00:22:54.719 [2024-06-09 21:05:22.772474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:54.719 [2024-06-09 21:05:22.772699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127514 ] 00:22:54.977 [2024-06-09 21:05:22.938170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.977 [2024-06-09 21:05:23.114706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.480  Copying: 60/60 [kB] (average 58 MBps) 00:22:56.480 00:22:56.480 21:05:24 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:22:56.480 21:05:24 -- dd/basic_rw.sh@37 -- # gen_conf 00:22:56.480 21:05:24 -- dd/common.sh@31 -- # xtrace_disable 00:22:56.480 21:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:56.480 { 00:22:56.480 "subsystems": [ 00:22:56.480 { 00:22:56.480 "subsystem": "bdev", 00:22:56.480 "config": [ 00:22:56.480 { 00:22:56.480 "params": { 00:22:56.480 "trtype": "pcie", 00:22:56.480 "traddr": "0000:00:06.0", 00:22:56.480 "name": "Nvme0" 00:22:56.480 }, 00:22:56.480 "method": "bdev_nvme_attach_controller" 00:22:56.480 }, 00:22:56.480 { 00:22:56.480 "method": "bdev_wait_for_examine" 00:22:56.480 } 00:22:56.480 ] 00:22:56.480 } 00:22:56.480 ] 00:22:56.480 } 00:22:56.480 [2024-06-09 21:05:24.471985] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:22:56.480 [2024-06-09 21:05:24.472218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127541 ] 00:22:56.480 [2024-06-09 21:05:24.636720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.738 [2024-06-09 21:05:24.791670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.929  Copying: 60/60 [kB] (average 58 MBps) 00:22:57.929 00:22:57.929 21:05:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:22:57.929 21:05:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:22:57.929 21:05:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:22:57.929 21:05:26 -- dd/common.sh@11 -- # local nvme_ref= 00:22:57.929 21:05:26 -- dd/common.sh@12 -- # local size=61440 00:22:57.929 21:05:26 -- dd/common.sh@14 -- # local bs=1048576 00:22:57.929 21:05:26 -- dd/common.sh@15 -- # local count=1 00:22:57.929 21:05:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:22:57.929 21:05:26 -- dd/common.sh@18 -- # gen_conf 00:22:57.929 21:05:26 -- dd/common.sh@31 -- # xtrace_disable 00:22:57.929 21:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:57.929 { 00:22:57.929 "subsystems": [ 00:22:57.929 { 00:22:57.929 "subsystem": "bdev", 00:22:57.929 "config": [ 00:22:57.929 { 00:22:57.929 "params": { 00:22:57.929 "trtype": "pcie", 00:22:57.929 "traddr": "0000:00:06.0", 00:22:57.929 "name": "Nvme0" 00:22:57.929 }, 00:22:57.929 "method": "bdev_nvme_attach_controller" 00:22:57.929 }, 00:22:57.929 { 00:22:57.929 "method": "bdev_wait_for_examine" 00:22:57.929 } 00:22:57.929 ] 00:22:57.929 } 00:22:57.929 ] 00:22:57.929 } 00:22:57.929 [2024-06-09 21:05:26.101844] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
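Note: between iterations the harness calls clear_nvme — visible above as the spdk_dd run with --if=/dev/zero --bs=1048576 --count=1 — so the region just exercised is zero-filled and a later read-back can never be satisfied by data left over from a previous pass. A sketch of that step; the count arithmetic is my guess from size=61440 yielding count=1, and the empty second argument (nvme_ref) of the real helper is dropped here:

    # Sketch: scrub the test region with zeroes before the next (bs, qd) round.
    clear_nvme() {
        local bdev=$1 size=$2 bs=1048576
        local count=$(( (size + bs - 1) / bs ))   # round up to whole 1 MiB blocks (assumed)
        spdk_dd --if=/dev/zero --ob=$bdev --bs=$bs --count=$count --json <(gen_conf)
    }
    clear_nvme Nvme0n1 61440    # the log's call is: clear_nvme Nvme0n1 '' 61440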
00:22:57.929 [2024-06-09 21:05:26.102027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127565 ] 00:22:58.187 [2024-06-09 21:05:26.269451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.445 [2024-06-09 21:05:26.443807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.638  Copying: 1024/1024 [kB] (average 500 MBps) 00:22:59.638 00:22:59.638 21:05:27 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:22:59.638 21:05:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:22:59.638 21:05:27 -- dd/basic_rw.sh@23 -- # count=7 00:22:59.638 21:05:27 -- dd/basic_rw.sh@24 -- # count=7 00:22:59.638 21:05:27 -- dd/basic_rw.sh@25 -- # size=57344 00:22:59.638 21:05:27 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:22:59.638 21:05:27 -- dd/common.sh@98 -- # xtrace_disable 00:22:59.638 21:05:27 -- common/autotest_common.sh@10 -- # set +x 00:23:00.204 21:05:28 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:23:00.204 21:05:28 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:00.204 21:05:28 -- dd/common.sh@31 -- # xtrace_disable 00:23:00.204 21:05:28 -- common/autotest_common.sh@10 -- # set +x 00:23:00.204 { 00:23:00.204 "subsystems": [ 00:23:00.204 { 00:23:00.204 "subsystem": "bdev", 00:23:00.204 "config": [ 00:23:00.204 { 00:23:00.204 "params": { 00:23:00.204 "trtype": "pcie", 00:23:00.204 "traddr": "0000:00:06.0", 00:23:00.204 "name": "Nvme0" 00:23:00.204 }, 00:23:00.204 "method": "bdev_nvme_attach_controller" 00:23:00.204 }, 00:23:00.204 { 00:23:00.204 "method": "bdev_wait_for_examine" 00:23:00.204 } 00:23:00.204 ] 00:23:00.204 } 00:23:00.204 ] 00:23:00.204 } 00:23:00.204 [2024-06-09 21:05:28.335090] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:00.204 [2024-06-09 21:05:28.335301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127601 ] 00:23:00.462 [2024-06-09 21:05:28.504721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.721 [2024-06-09 21:05:28.694337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.940  Copying: 56/56 [kB] (average 54 MBps) 00:23:01.940 00:23:01.940 21:05:29 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:23:01.940 21:05:29 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:01.940 21:05:29 -- dd/common.sh@31 -- # xtrace_disable 00:23:01.940 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:23:01.940 [2024-06-09 21:05:29.978060] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:01.940 [2024-06-09 21:05:29.978441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127626 ] 00:23:01.940 { 00:23:01.940 "subsystems": [ 00:23:01.940 { 00:23:01.940 "subsystem": "bdev", 00:23:01.940 "config": [ 00:23:01.940 { 00:23:01.940 "params": { 00:23:01.940 "trtype": "pcie", 00:23:01.940 "traddr": "0000:00:06.0", 00:23:01.940 "name": "Nvme0" 00:23:01.940 }, 00:23:01.940 "method": "bdev_nvme_attach_controller" 00:23:01.940 }, 00:23:01.940 { 00:23:01.940 "method": "bdev_wait_for_examine" 00:23:01.940 } 00:23:01.940 ] 00:23:01.940 } 00:23:01.940 ] 00:23:01.940 } 00:23:02.215 [2024-06-09 21:05:30.133221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.215 [2024-06-09 21:05:30.311184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.851  Copying: 56/56 [kB] (average 27 MBps) 00:23:03.851 00:23:03.851 21:05:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:03.851 21:05:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:23:03.851 21:05:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:03.851 21:05:31 -- dd/common.sh@11 -- # local nvme_ref= 00:23:03.851 21:05:31 -- dd/common.sh@12 -- # local size=57344 00:23:03.851 21:05:31 -- dd/common.sh@14 -- # local bs=1048576 00:23:03.851 21:05:31 -- dd/common.sh@15 -- # local count=1 00:23:03.851 21:05:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:03.851 21:05:31 -- dd/common.sh@18 -- # gen_conf 00:23:03.851 21:05:31 -- dd/common.sh@31 -- # xtrace_disable 00:23:03.851 21:05:31 -- common/autotest_common.sh@10 -- # set +x 00:23:03.851 [2024-06-09 21:05:31.697227] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:03.851 [2024-06-09 21:05:31.697997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127654 ] 00:23:03.851 { 00:23:03.851 "subsystems": [ 00:23:03.851 { 00:23:03.851 "subsystem": "bdev", 00:23:03.851 "config": [ 00:23:03.851 { 00:23:03.851 "params": { 00:23:03.851 "trtype": "pcie", 00:23:03.851 "traddr": "0000:00:06.0", 00:23:03.851 "name": "Nvme0" 00:23:03.851 }, 00:23:03.851 "method": "bdev_nvme_attach_controller" 00:23:03.851 }, 00:23:03.851 { 00:23:03.851 "method": "bdev_wait_for_examine" 00:23:03.851 } 00:23:03.851 ] 00:23:03.851 } 00:23:03.851 ] 00:23:03.851 } 00:23:03.851 [2024-06-09 21:05:31.852668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.851 [2024-06-09 21:05:32.012079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.365  Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:05.365 00:23:05.365 21:05:33 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:05.365 21:05:33 -- dd/basic_rw.sh@23 -- # count=7 00:23:05.365 21:05:33 -- dd/basic_rw.sh@24 -- # count=7 00:23:05.365 21:05:33 -- dd/basic_rw.sh@25 -- # size=57344 00:23:05.365 21:05:33 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:23:05.365 21:05:33 -- dd/common.sh@98 -- # xtrace_disable 00:23:05.365 21:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.624 21:05:33 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:23:05.624 21:05:33 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:05.624 21:05:33 -- dd/common.sh@31 -- # xtrace_disable 00:23:05.624 21:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.624 { 00:23:05.624 "subsystems": [ 00:23:05.624 { 00:23:05.624 "subsystem": "bdev", 00:23:05.624 "config": [ 00:23:05.624 { 00:23:05.624 "params": { 00:23:05.624 "trtype": "pcie", 00:23:05.624 "traddr": "0000:00:06.0", 00:23:05.624 "name": "Nvme0" 00:23:05.624 }, 00:23:05.624 "method": "bdev_nvme_attach_controller" 00:23:05.624 }, 00:23:05.624 { 00:23:05.624 "method": "bdev_wait_for_examine" 00:23:05.624 } 00:23:05.624 ] 00:23:05.624 } 00:23:05.624 ] 00:23:05.624 } 00:23:05.624 [2024-06-09 21:05:33.800668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
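Note: by this point the sweep has moved from 4 KiB to 8 KiB blocks. The combinations come from the loop set up at the start of dd_rw: block sizes are the native size shifted left by 0..2, queue depths come from qds=(1 64), and count shrinks with block size (15, 7, then 3) so every transfer stays in the 48-60 KiB range. The loop skeleton, reduced from the trace:

    # The dd_rw sweep: three block sizes x two queue depths.
    native_bs=4096
    qds=(1 64)
    bss=()
    for s in {0..2}; do
        bss+=($(( native_bs << s )))       # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            : # one write/read/diff/clear cycle per combination, as traced above
        done
    done

With counts of 15, 7 and 3 the transfer sizes work out to 61440, 57344 and 49152 bytes — exactly the gen_bytes arguments that appear in the log.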
00:23:05.624 [2024-06-09 21:05:33.800875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127689 ] 00:23:05.883 [2024-06-09 21:05:33.967943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.141 [2024-06-09 21:05:34.139261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.337  Copying: 56/56 [kB] (average 54 MBps) 00:23:07.337 00:23:07.337 21:05:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:23:07.337 21:05:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:07.337 21:05:35 -- dd/common.sh@31 -- # xtrace_disable 00:23:07.337 21:05:35 -- common/autotest_common.sh@10 -- # set +x 00:23:07.595 { 00:23:07.595 "subsystems": [ 00:23:07.595 { 00:23:07.595 "subsystem": "bdev", 00:23:07.595 "config": [ 00:23:07.595 { 00:23:07.595 "params": { 00:23:07.595 "trtype": "pcie", 00:23:07.595 "traddr": "0000:00:06.0", 00:23:07.595 "name": "Nvme0" 00:23:07.595 }, 00:23:07.596 "method": "bdev_nvme_attach_controller" 00:23:07.596 }, 00:23:07.596 { 00:23:07.596 "method": "bdev_wait_for_examine" 00:23:07.596 } 00:23:07.596 ] 00:23:07.596 } 00:23:07.596 ] 00:23:07.596 } 00:23:07.596 [2024-06-09 21:05:35.526708] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:07.596 [2024-06-09 21:05:35.526958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127716 ] 00:23:07.596 [2024-06-09 21:05:35.688999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.854 [2024-06-09 21:05:35.877351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.047  Copying: 56/56 [kB] (average 54 MBps) 00:23:09.047 00:23:09.047 21:05:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:09.047 21:05:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:23:09.047 21:05:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:09.047 21:05:37 -- dd/common.sh@11 -- # local nvme_ref= 00:23:09.047 21:05:37 -- dd/common.sh@12 -- # local size=57344 00:23:09.047 21:05:37 -- dd/common.sh@14 -- # local bs=1048576 00:23:09.047 21:05:37 -- dd/common.sh@15 -- # local count=1 00:23:09.047 21:05:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:09.047 21:05:37 -- dd/common.sh@18 -- # gen_conf 00:23:09.047 21:05:37 -- dd/common.sh@31 -- # xtrace_disable 00:23:09.047 21:05:37 -- common/autotest_common.sh@10 -- # set +x 00:23:09.047 [2024-06-09 21:05:37.182714] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:09.047 [2024-06-09 21:05:37.182935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127742 ] 00:23:09.047 { 00:23:09.047 "subsystems": [ 00:23:09.047 { 00:23:09.047 "subsystem": "bdev", 00:23:09.047 "config": [ 00:23:09.047 { 00:23:09.047 "params": { 00:23:09.047 "trtype": "pcie", 00:23:09.047 "traddr": "0000:00:06.0", 00:23:09.047 "name": "Nvme0" 00:23:09.047 }, 00:23:09.047 "method": "bdev_nvme_attach_controller" 00:23:09.047 }, 00:23:09.047 { 00:23:09.047 "method": "bdev_wait_for_examine" 00:23:09.047 } 00:23:09.047 ] 00:23:09.047 } 00:23:09.047 ] 00:23:09.047 } 00:23:09.305 [2024-06-09 21:05:37.337445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.564 [2024-06-09 21:05:37.517446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.761  Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:10.761 00:23:10.761 21:05:38 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:23:10.761 21:05:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:10.761 21:05:38 -- dd/basic_rw.sh@23 -- # count=3 00:23:10.761 21:05:38 -- dd/basic_rw.sh@24 -- # count=3 00:23:10.761 21:05:38 -- dd/basic_rw.sh@25 -- # size=49152 00:23:10.761 21:05:38 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:23:10.761 21:05:38 -- dd/common.sh@98 -- # xtrace_disable 00:23:10.761 21:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:11.329 21:05:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:23:11.329 21:05:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:11.329 21:05:39 -- dd/common.sh@31 -- # xtrace_disable 00:23:11.329 21:05:39 -- common/autotest_common.sh@10 -- # set +x 00:23:11.329 { 00:23:11.329 "subsystems": [ 00:23:11.329 { 00:23:11.329 "subsystem": "bdev", 00:23:11.329 "config": [ 00:23:11.329 { 00:23:11.329 "params": { 00:23:11.329 "trtype": "pcie", 00:23:11.329 "traddr": "0000:00:06.0", 00:23:11.329 "name": "Nvme0" 00:23:11.329 }, 00:23:11.329 "method": "bdev_nvme_attach_controller" 00:23:11.329 }, 00:23:11.329 { 00:23:11.329 "method": "bdev_wait_for_examine" 00:23:11.329 } 00:23:11.329 ] 00:23:11.329 } 00:23:11.329 ] 00:23:11.329 } 00:23:11.329 [2024-06-09 21:05:39.319440] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:11.329 [2024-06-09 21:05:39.319644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127769 ] 00:23:11.329 [2024-06-09 21:05:39.489802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.588 [2024-06-09 21:05:39.670387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.814  Copying: 48/48 [kB] (average 46 MBps) 00:23:12.814 00:23:12.814 21:05:40 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:23:12.814 21:05:40 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:12.814 21:05:40 -- dd/common.sh@31 -- # xtrace_disable 00:23:12.814 21:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.814 { 00:23:12.814 "subsystems": [ 00:23:12.814 { 00:23:12.814 "subsystem": "bdev", 00:23:12.814 "config": [ 00:23:12.814 { 00:23:12.814 "params": { 00:23:12.814 "trtype": "pcie", 00:23:12.814 "traddr": "0000:00:06.0", 00:23:12.814 "name": "Nvme0" 00:23:12.814 }, 00:23:12.814 "method": "bdev_nvme_attach_controller" 00:23:12.814 }, 00:23:12.814 { 00:23:12.814 "method": "bdev_wait_for_examine" 00:23:12.814 } 00:23:12.814 ] 00:23:12.814 } 00:23:12.814 ] 00:23:12.814 } 00:23:12.814 [2024-06-09 21:05:40.974840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:12.814 [2024-06-09 21:05:40.975043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127801 ] 00:23:13.073 [2024-06-09 21:05:41.141373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.332 [2024-06-09 21:05:41.311767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.526  Copying: 48/48 [kB] (average 46 MBps) 00:23:14.526 00:23:14.526 21:05:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:14.526 21:05:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:23:14.526 21:05:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:14.526 21:05:42 -- dd/common.sh@11 -- # local nvme_ref= 00:23:14.526 21:05:42 -- dd/common.sh@12 -- # local size=49152 00:23:14.526 21:05:42 -- dd/common.sh@14 -- # local bs=1048576 00:23:14.526 21:05:42 -- dd/common.sh@15 -- # local count=1 00:23:14.526 21:05:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:14.526 21:05:42 -- dd/common.sh@18 -- # gen_conf 00:23:14.526 21:05:42 -- dd/common.sh@31 -- # xtrace_disable 00:23:14.526 21:05:42 -- common/autotest_common.sh@10 -- # set +x 00:23:14.526 [2024-06-09 21:05:42.659583] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:14.526 [2024-06-09 21:05:42.660343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127829 ] 00:23:14.526 { 00:23:14.526 "subsystems": [ 00:23:14.526 { 00:23:14.526 "subsystem": "bdev", 00:23:14.526 "config": [ 00:23:14.526 { 00:23:14.526 "params": { 00:23:14.526 "trtype": "pcie", 00:23:14.527 "traddr": "0000:00:06.0", 00:23:14.527 "name": "Nvme0" 00:23:14.527 }, 00:23:14.527 "method": "bdev_nvme_attach_controller" 00:23:14.527 }, 00:23:14.527 { 00:23:14.527 "method": "bdev_wait_for_examine" 00:23:14.527 } 00:23:14.527 ] 00:23:14.527 } 00:23:14.527 ] 00:23:14.527 } 00:23:14.785 [2024-06-09 21:05:42.817227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.044 [2024-06-09 21:05:43.007402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.241  Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:16.241 00:23:16.241 21:05:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:23:16.241 21:05:44 -- dd/basic_rw.sh@23 -- # count=3 00:23:16.241 21:05:44 -- dd/basic_rw.sh@24 -- # count=3 00:23:16.241 21:05:44 -- dd/basic_rw.sh@25 -- # size=49152 00:23:16.241 21:05:44 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:23:16.241 21:05:44 -- dd/common.sh@98 -- # xtrace_disable 00:23:16.241 21:05:44 -- common/autotest_common.sh@10 -- # set +x 00:23:16.809 21:05:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:23:16.809 21:05:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:23:16.809 21:05:44 -- dd/common.sh@31 -- # xtrace_disable 00:23:16.809 21:05:44 -- common/autotest_common.sh@10 -- # set +x 00:23:16.809 { 00:23:16.809 "subsystems": [ 00:23:16.809 { 00:23:16.809 "subsystem": "bdev", 00:23:16.809 "config": [ 00:23:16.809 { 00:23:16.809 "params": { 00:23:16.809 "trtype": "pcie", 00:23:16.809 "traddr": "0000:00:06.0", 00:23:16.809 "name": "Nvme0" 00:23:16.809 }, 00:23:16.809 "method": "bdev_nvme_attach_controller" 00:23:16.809 }, 00:23:16.809 { 00:23:16.809 "method": "bdev_wait_for_examine" 00:23:16.809 } 00:23:16.809 ] 00:23:16.809 } 00:23:16.809 ] 00:23:16.809 } 00:23:16.809 [2024-06-09 21:05:44.744098] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:16.809 [2024-06-09 21:05:44.744307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127863 ] 00:23:16.809 [2024-06-09 21:05:44.912297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.068 [2024-06-09 21:05:45.068540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.263  Copying: 48/48 [kB] (average 46 MBps) 00:23:18.263 00:23:18.263 21:05:46 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:23:18.263 21:05:46 -- dd/basic_rw.sh@37 -- # gen_conf 00:23:18.264 21:05:46 -- dd/common.sh@31 -- # xtrace_disable 00:23:18.264 21:05:46 -- common/autotest_common.sh@10 -- # set +x 00:23:18.264 { 00:23:18.264 "subsystems": [ 00:23:18.264 { 00:23:18.264 "subsystem": "bdev", 00:23:18.264 "config": [ 00:23:18.264 { 00:23:18.264 "params": { 00:23:18.264 "trtype": "pcie", 00:23:18.264 "traddr": "0000:00:06.0", 00:23:18.264 "name": "Nvme0" 00:23:18.264 }, 00:23:18.264 "method": "bdev_nvme_attach_controller" 00:23:18.264 }, 00:23:18.264 { 00:23:18.264 "method": "bdev_wait_for_examine" 00:23:18.264 } 00:23:18.264 ] 00:23:18.264 } 00:23:18.264 ] 00:23:18.264 } 00:23:18.264 [2024-06-09 21:05:46.422924] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:18.264 [2024-06-09 21:05:46.423117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127883 ] 00:23:18.523 [2024-06-09 21:05:46.589752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.781 [2024-06-09 21:05:46.776864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.977  Copying: 48/48 [kB] (average 46 MBps) 00:23:19.977 00:23:19.977 21:05:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:19.977 21:05:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:23:19.977 21:05:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:19.977 21:05:48 -- dd/common.sh@11 -- # local nvme_ref= 00:23:19.977 21:05:48 -- dd/common.sh@12 -- # local size=49152 00:23:19.977 21:05:48 -- dd/common.sh@14 -- # local bs=1048576 00:23:19.977 21:05:48 -- dd/common.sh@15 -- # local count=1 00:23:19.977 21:05:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:19.977 21:05:48 -- dd/common.sh@18 -- # gen_conf 00:23:19.977 21:05:48 -- dd/common.sh@31 -- # xtrace_disable 00:23:19.977 21:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:19.977 { 00:23:19.977 "subsystems": [ 00:23:19.977 { 00:23:19.977 "subsystem": "bdev", 00:23:19.977 "config": [ 00:23:19.977 { 00:23:19.977 "params": { 00:23:19.977 "trtype": "pcie", 00:23:19.977 "traddr": "0000:00:06.0", 00:23:19.977 "name": "Nvme0" 00:23:19.977 }, 00:23:19.977 "method": "bdev_nvme_attach_controller" 00:23:19.977 }, 00:23:19.977 { 00:23:19.977 "method": "bdev_wait_for_examine" 00:23:19.977 } 00:23:19.977 ] 00:23:19.977 } 00:23:19.977 ] 00:23:19.977 } 00:23:19.977 [2024-06-09 21:05:48.095343] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:23:19.977 [2024-06-09 21:05:48.095561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127916 ] 00:23:20.237 [2024-06-09 21:05:48.263897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.495 [2024-06-09 21:05:48.437001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.689  Copying: 1024/1024 [kB] (average 1000 MBps) 00:23:21.689 00:23:21.689 ************************************ 00:23:21.689 END TEST dd_rw 00:23:21.689 ************************************ 00:23:21.689 00:23:21.689 real 0m33.071s 00:23:21.689 user 0m27.381s 00:23:21.689 sys 0m4.411s 00:23:21.689 21:05:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.689 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:21.689 21:05:49 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:23:21.689 21:05:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:21.689 21:05:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:21.689 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:21.689 ************************************ 00:23:21.689 START TEST dd_rw_offset 00:23:21.689 ************************************ 00:23:21.689 21:05:49 -- common/autotest_common.sh@1104 -- # basic_offset 00:23:21.689 21:05:49 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:23:21.689 21:05:49 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:23:21.689 21:05:49 -- dd/common.sh@98 -- # xtrace_disable 00:23:21.689 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:21.996 21:05:49 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:23:21.996 21:05:49 -- dd/basic_rw.sh@56 -- # 
data=8g92d33ffb3fli7mr4kpse5w2c53u2rayzrfq81022ep61zzrrfyc9w9d874soodgdae168ol82fp14z1v49x1swdjxrlpmt0ugid77ei1y9n2bad606i2hro4ms8n79bv7eguhtcpyl8faf6tpt3h0v6a7hml0bfe31jemb3k0muhoaef1emk05yxa3qt [... remainder of the 4096-byte generated random payload elided ...] fzj31qnn63yzye13htmwetnfat7nz69wtk8doiiu 00:23:21.996 21:05:49 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:23:21.996 21:05:49 -- dd/basic_rw.sh@59 -- # gen_conf 00:23:21.996 21:05:49 -- dd/common.sh@31 -- # xtrace_disable 00:23:21.996 21:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:21.996 { 00:23:21.996 "subsystems": [ 00:23:21.996 { 00:23:21.996 "subsystem": "bdev", 00:23:21.996 "config": [ 00:23:21.996 { 00:23:21.996 "params": { 00:23:21.996 "trtype": "pcie", 00:23:21.996 "traddr": "0000:00:06.0", 00:23:21.996 "name": "Nvme0" 00:23:21.996 }, 00:23:21.996 "method": "bdev_nvme_attach_controller" 00:23:21.996 }, 00:23:21.996 { 00:23:21.996 "method": "bdev_wait_for_examine" 00:23:21.996 } 00:23:21.996 ] 00:23:21.996 } 00:23:21.996 ] 00:23:21.996 } 00:23:21.996 [2024-06-09 21:05:49.943369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:21.996 [2024-06-09 21:05:49.943739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127963 ] 00:23:21.996 [2024-06-09 21:05:50.111619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.256 [2024-06-09 21:05:50.300606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.449  Copying: 4096/4096 [B] (average 4000 kBps) 00:23:23.449 00:23:23.449 21:05:51 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:23:23.449 21:05:51 -- dd/basic_rw.sh@65 -- # gen_conf 00:23:23.449 21:05:51 -- dd/common.sh@31 -- # xtrace_disable 00:23:23.449 21:05:51 -- common/autotest_common.sh@10 -- # set +x 00:23:23.449 { 00:23:23.449 "subsystems": [ 00:23:23.449 { 00:23:23.449 "subsystem": "bdev", 00:23:23.449 "config": [ 00:23:23.449 { 00:23:23.449 "params": { 00:23:23.449 "trtype": "pcie", 00:23:23.449 "traddr": "0000:00:06.0", 00:23:23.449 "name": "Nvme0" 00:23:23.449 }, 00:23:23.449 "method": "bdev_nvme_attach_controller" 00:23:23.449 }, 00:23:23.449 { 00:23:23.449 "method": "bdev_wait_for_examine" 00:23:23.449 } 00:23:23.449 ] 00:23:23.449 } 00:23:23.449 ] 00:23:23.449 } 00:23:23.449 [2024-06-09 21:05:51.605028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
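Pretty-printed for reference, the bdev configuration that gen_conf hands every spdk_dd invocation above on file descriptor 62 (--json /dev/fd/62) is exactly the inline blob in the trace:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }

It attaches the NVMe controller at PCI address 0000:00:06.0 as bdev Nvme0 (whose namespace Nvme0n1 the commands target) and waits for bdev examination before I/O starts. The /dev/fd/62 path suggests the harness feeds this in via shell process substitution, roughly spdk_dd ... --json <(gen_conf); that is an inference from the trace, not something the log shows directly.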
00:23:23.449 [2024-06-09 21:05:51.605363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127991 ] 00:23:23.708 [2024-06-09 21:05:51.772738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.966 [2024-06-09 21:05:51.932921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.161  Copying: 4096/4096 [B] (average 4000 kBps) 00:23:25.161 00:23:25.161 21:05:53 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:23:25.162 21:05:53 -- dd/basic_rw.sh@72 -- # [[ 8g92d33ffb3fli7mr4kpse5w2c53u2rayzrfq81022ep61zzrrfyc9w9d874soo [... read-back copy of the same 4096-byte payload elided ...] wtk8doiiu == \8\g\9\2\d\3\3\f\f\b\3\f [... backslash-escaped duplicate of the expected payload elided ...] \d\o\i\i\u ]] 00:23:25.162 00:23:25.162 real 0m3.449s 00:23:25.162 user 0m2.862s 00:23:25.162 sys 0m0.469s 00:23:25.162 21:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.162 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:23:25.162 ************************************ 00:23:25.162 END TEST dd_rw_offset 00:23:25.162 ************************************ 00:23:25.162 21:05:53 -- dd/basic_rw.sh@1 -- # cleanup 00:23:25.162 21:05:53 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:23:25.162 21:05:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:23:25.162 21:05:53 -- dd/common.sh@11 -- # local nvme_ref= 00:23:25.162 21:05:53 -- dd/common.sh@12 -- # local size=0xffff 00:23:25.162 21:05:53 -- dd/common.sh@14 -- # local bs=1048576 00:23:25.162 21:05:53 -- dd/common.sh@15 -- # local count=1 00:23:25.162 21:05:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:23:25.162 21:05:53 -- dd/common.sh@18 -- # gen_conf 00:23:25.162 21:05:53 -- dd/common.sh@31 -- # xtrace_disable 00:23:25.162 21:05:53 -- common/autotest_common.sh@10 -- # set +x 00:23:25.421 { 00:23:25.421 "subsystems": [ 00:23:25.421 { 00:23:25.421
"subsystem": "bdev", 00:23:25.421 "config": [ 00:23:25.421 { 00:23:25.421 "params": { 00:23:25.421 "trtype": "pcie", 00:23:25.421 "traddr": "0000:00:06.0", 00:23:25.421 "name": "Nvme0" 00:23:25.421 }, 00:23:25.421 "method": "bdev_nvme_attach_controller" 00:23:25.421 }, 00:23:25.421 { 00:23:25.421 "method": "bdev_wait_for_examine" 00:23:25.421 } 00:23:25.421 ] 00:23:25.421 } 00:23:25.421 ] 00:23:25.421 } 00:23:25.421 [2024-06-09 21:05:53.378696] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:25.421 [2024-06-09 21:05:53.379273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128036 ] 00:23:25.421 [2024-06-09 21:05:53.545324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.679 [2024-06-09 21:05:53.722863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.873  Copying: 1024/1024 [kB] (average 500 MBps) 00:23:26.873 00:23:27.133 21:05:55 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:27.133 00:23:27.133 real 0m40.457s 00:23:27.133 user 0m33.359s 00:23:27.133 sys 0m5.496s 00:23:27.133 21:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:27.133 ************************************ 00:23:27.133 END TEST spdk_dd_basic_rw 00:23:27.133 ************************************ 00:23:27.133 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:27.133 21:05:55 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:23:27.133 21:05:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:27.133 21:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.133 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:27.133 ************************************ 00:23:27.133 START TEST spdk_dd_posix 00:23:27.133 ************************************ 00:23:27.133 21:05:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:23:27.133 * Looking for test storage... 
00:23:27.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:23:27.133 21:05:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.133 21:05:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.133 21:05:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.133 21:05:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.133 21:05:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain path segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:27.133 21:05:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:[... same value with the golangci segment prepended, elided ...] 00:23:27.133 21:05:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value with the protoc segment prepended, elided ...] 00:23:27.133 21:05:55 -- paths/export.sh@5 -- # export PATH 00:23:27.133 21:05:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... final PATH value, identical to the line above, elided ...] 00:23:27.133 21:05:55 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:23:27.133 21:05:55 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:23:27.133 21:05:55 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:23:27.133 21:05:55 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:23:27.133 21:05:55 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:27.133 21:05:55 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:27.133 21:05:55 -- dd/posix.sh@130 -- # tests 00:23:27.133 21:05:55 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:23:27.133 * First test run, using AIO 00:23:27.133 21:05:55 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:23:27.133 21:05:55 --
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:27.133 21:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:27.133 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:27.133 ************************************ 00:23:27.133 START TEST dd_flag_append 00:23:27.133 ************************************ 00:23:27.133 21:05:55 -- common/autotest_common.sh@1104 -- # append 00:23:27.133 21:05:55 -- dd/posix.sh@16 -- # local dump0 00:23:27.133 21:05:55 -- dd/posix.sh@17 -- # local dump1 00:23:27.133 21:05:55 -- dd/posix.sh@19 -- # gen_bytes 32 00:23:27.133 21:05:55 -- dd/common.sh@98 -- # xtrace_disable 00:23:27.133 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:27.133 21:05:55 -- dd/posix.sh@19 -- # dump0=oimjnnr8h5q7pmzwy6ubs2rnk7dv6kqb 00:23:27.133 21:05:55 -- dd/posix.sh@20 -- # gen_bytes 32 00:23:27.133 21:05:55 -- dd/common.sh@98 -- # xtrace_disable 00:23:27.133 21:05:55 -- common/autotest_common.sh@10 -- # set +x 00:23:27.133 21:05:55 -- dd/posix.sh@20 -- # dump1=dbqxbp8d8e0qiyu8b2quq3b1uawpyi5g 00:23:27.133 21:05:55 -- dd/posix.sh@22 -- # printf %s oimjnnr8h5q7pmzwy6ubs2rnk7dv6kqb 00:23:27.133 21:05:55 -- dd/posix.sh@23 -- # printf %s dbqxbp8d8e0qiyu8b2quq3b1uawpyi5g 00:23:27.133 21:05:55 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:23:27.133 [2024-06-09 21:05:55.252233] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:27.133 [2024-06-09 21:05:55.252588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128120 ] 00:23:27.391 [2024-06-09 21:05:55.403261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.649 [2024-06-09 21:05:55.586350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.842  Copying: 32/32 [B] (average 31 kBps) 00:23:28.842 00:23:28.842 21:05:56 -- dd/posix.sh@27 -- # [[ dbqxbp8d8e0qiyu8b2quq3b1uawpyi5goimjnnr8h5q7pmzwy6ubs2rnk7dv6kqb == \d\b\q\x\b\p\8\d\8\e\0\q\i\y\u\8\b\2\q\u\q\3\b\1\u\a\w\p\y\i\5\g\o\i\m\j\n\n\r\8\h\5\q\7\p\m\z\w\y\6\u\b\s\2\r\n\k\7\d\v\6\k\q\b ]] 00:23:28.842 00:23:28.842 real 0m1.677s 00:23:28.842 user 0m1.312s 00:23:28.842 sys 0m0.216s 00:23:28.842 ************************************ 00:23:28.842 END TEST dd_flag_append 00:23:28.842 ************************************ 00:23:28.842 21:05:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.842 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:23:28.842 21:05:56 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:23:28.843 21:05:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:28.843 21:05:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:28.843 21:05:56 -- common/autotest_common.sh@10 -- # set +x 00:23:28.843 ************************************ 00:23:28.843 START TEST dd_flag_directory 00:23:28.843 ************************************ 00:23:28.843 21:05:56 -- common/autotest_common.sh@1104 -- # directory 00:23:28.843 21:05:56 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:28.843 21:05:56 -- common/autotest_common.sh@640 -- # local es=0 
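The dd_flag_append pass that just finished generates two 32-byte strings, writes one file, then copies the other onto it with --oflag=append; the [[ dbqxbp...oimjnnr... == ... ]] check above asserts the result is the original contents immediately followed by the appended bytes. A rough equivalent with GNU dd (a sketch with made-up file names, not the test's code):

  printf %s 'first-half' > f1
  printf %s 'second-half' > f0
  # oflag=append opens f1 with O_APPEND; conv=notrunc stops dd truncating it first
  dd if=f0 of=f1 oflag=append conv=notrunc
  [[ $(cat f1) == 'first-halfsecond-half' ]] && echo 'append kept both halves'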
00:23:28.843 21:05:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:28.843 21:05:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.843 21:05:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:28.843 21:05:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.843 21:05:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:28.843 21:05:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.843 21:05:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:28.843 21:05:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:28.843 21:05:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:28.843 21:05:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:28.843 [2024-06-09 21:05:56.974069] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:28.843 [2024-06-09 21:05:56.974428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128167 ] 00:23:29.101 [2024-06-09 21:05:57.119744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.360 [2024-06-09 21:05:57.293901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.618 [2024-06-09 21:05:57.558894] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:29.618 [2024-06-09 21:05:57.559176] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:29.618 [2024-06-09 21:05:57.559241] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:30.185 [2024-06-09 21:05:58.178879] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:30.444 21:05:58 -- common/autotest_common.sh@643 -- # es=236 00:23:30.444 21:05:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:30.444 21:05:58 -- common/autotest_common.sh@652 -- # es=108 00:23:30.444 21:05:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:30.444 21:05:58 -- common/autotest_common.sh@660 -- # es=1 00:23:30.444 21:05:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:30.444 21:05:58 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:30.444 21:05:58 -- common/autotest_common.sh@640 -- # local es=0 00:23:30.444 21:05:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:30.444 21:05:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:30.444 21:05:58 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:30.444 21:05:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:30.444 21:05:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:30.444 21:05:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:30.444 21:05:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:30.444 21:05:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:30.444 21:05:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:30.444 21:05:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:30.444 [2024-06-09 21:05:58.576599] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:30.444 [2024-06-09 21:05:58.576949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128188 ] 00:23:30.703 [2024-06-09 21:05:58.732774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.961 [2024-06-09 21:05:58.901162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.220 [2024-06-09 21:05:59.151822] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:31.220 [2024-06-09 21:05:59.152126] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:31.220 [2024-06-09 21:05:59.152206] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:31.790 [2024-06-09 21:05:59.737412] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:32.049 ************************************ 00:23:32.049 END TEST dd_flag_directory 00:23:32.049 ************************************ 00:23:32.049 21:06:00 -- common/autotest_common.sh@643 -- # es=236 00:23:32.049 21:06:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:32.049 21:06:00 -- common/autotest_common.sh@652 -- # es=108 00:23:32.049 21:06:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:32.049 21:06:00 -- common/autotest_common.sh@660 -- # es=1 00:23:32.049 21:06:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:32.049 00:23:32.049 real 0m3.175s 00:23:32.049 user 0m2.547s 00:23:32.049 sys 0m0.422s 00:23:32.049 21:06:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.049 21:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:32.049 21:06:00 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:23:32.049 21:06:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:32.049 21:06:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:32.049 21:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:32.049 ************************************ 00:23:32.049 START TEST dd_flag_nofollow 00:23:32.049 ************************************ 00:23:32.049 21:06:00 -- common/autotest_common.sh@1104 -- # nofollow 00:23:32.049 21:06:00 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:32.049 21:06:00 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:32.049 21:06:00 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:32.049 21:06:00 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:32.049 21:06:00 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:32.049 21:06:00 -- common/autotest_common.sh@640 -- # local es=0 00:23:32.049 21:06:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:32.049 21:06:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.049 21:06:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.049 21:06:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.049 21:06:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.049 21:06:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.049 21:06:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:32.049 21:06:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.049 21:06:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:32.049 21:06:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:32.049 [2024-06-09 21:06:00.202237] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:32.049 [2024-06-09 21:06:00.202536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128238 ] 00:23:32.308 [2024-06-09 21:06:00.357017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.567 [2024-06-09 21:06:00.541076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.826 [2024-06-09 21:06:00.802553] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:32.826 [2024-06-09 21:06:00.802846] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:23:32.826 [2024-06-09 21:06:00.802912] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:33.394 [2024-06-09 21:06:01.413871] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:33.653 21:06:01 -- common/autotest_common.sh@643 -- # es=216 00:23:33.653 21:06:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:33.653 21:06:01 -- common/autotest_common.sh@652 -- # es=88 00:23:33.653 21:06:01 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:33.653 21:06:01 -- common/autotest_common.sh@660 -- # es=1 00:23:33.653 21:06:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:33.653 21:06:01 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:33.653 21:06:01 -- common/autotest_common.sh@640 -- # local es=0 00:23:33.653 21:06:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:33.653 21:06:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:33.653 21:06:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:33.653 21:06:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:33.653 21:06:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:33.653 21:06:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:33.653 21:06:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:33.653 21:06:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:33.653 21:06:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:33.653 21:06:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:23:33.653 [2024-06-09 21:06:01.822229] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
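dd_flag_nofollow, running through this stretch, symlinks dd.dump0.link to dd.dump0 and expects spdk_dd to refuse the link whenever --iflag=nofollow or --oflag=nofollow is set: O_NOFOLLOW on a symlink fails with ELOOP, the "Too many levels of symbolic links" error the trace records. The dd_flag_directory failures earlier are the same shape with O_DIRECTORY and ENOTDIR ("Not a directory"). Both checks, sketched against GNU dd's equivalent flags (illustrative only; error wording may vary by coreutils version):

  printf %s 'payload' > dd.dump0
  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link iflag=nofollow of=/dev/null 2>&1 | grep 'symbolic links'   # ELOOP expected
  dd if=dd.dump0 iflag=directory of=/dev/null 2>&1 | grep 'Not a directory'      # ENOTDIR expected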
00:23:33.653 [2024-06-09 21:06:01.822542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128261 ] 00:23:33.912 [2024-06-09 21:06:01.974382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.171 [2024-06-09 21:06:02.130491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.429 [2024-06-09 21:06:02.383987] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:34.429 [2024-06-09 21:06:02.384259] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:23:34.429 [2024-06-09 21:06:02.384337] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:34.996 [2024-06-09 21:06:02.984854] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:35.254 21:06:03 -- common/autotest_common.sh@643 -- # es=216 00:23:35.254 21:06:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:35.254 21:06:03 -- common/autotest_common.sh@652 -- # es=88 00:23:35.254 21:06:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:35.254 21:06:03 -- common/autotest_common.sh@660 -- # es=1 00:23:35.254 21:06:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:35.254 21:06:03 -- dd/posix.sh@46 -- # gen_bytes 512 00:23:35.254 21:06:03 -- dd/common.sh@98 -- # xtrace_disable 00:23:35.254 21:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:35.254 21:06:03 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:35.254 [2024-06-09 21:06:03.390953] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
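The long valid_exec_arg/NOT chains traced around each of these expected-failure runs do visible exit-status bookkeeping: es=236 or es=216, then (( es > 128 )), then es=108 or es=88, then es=1. Statuses above 128 encode death by signal (128+N), so the harness folds those back to the raw value before collapsing any failure to 1 and inverting it. A condensed sketch of that logic (the real helper in autotest_common.sh is more elaborate; this just mirrors the steps the trace shows):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # 128+N means the command was killed by signal N
    (( es != 0 )) && es=1                 # collapse every real failure to 1
    (( !es == 0 ))                        # invert: NOT succeeds only when the command failed
  }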
00:23:35.254 [2024-06-09 21:06:03.391351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128283 ] 00:23:35.512 [2024-06-09 21:06:03.539311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.771 [2024-06-09 21:06:03.717957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.963  Copying: 512/512 [B] (average 500 kBps) 00:23:36.963 00:23:36.963 21:06:04 -- dd/posix.sh@49 -- # [[ z5yqnvcyexgbp3vtk09w01gxydxao70h36qjjlpuyumxzmrfqqekxyh61i4den50l605qyt13yiuo2x6hp2ssrsmla6b39vg37lz7ypatr7rbl1nnmjb7ck8ivmc0i84syaa9ljfjl19vcenfvhlllht18lifdg7gg8mvzui0hr6oxfhgzoc10nskwk5nx7xdt0d6tjoxhsggq435ambsxv1vxkjytg5kh8xtkb14z61luygfl9ob8yzjrmqg7140hhrvspuhk6qu5mo4xxfzkbwwwt901su43q4ydt5c7ocdfo7yqwrno2nw0rlhyjatq37sddz6bwrni56eurzn4yh6vq7ryd0kbbswtu04grkrd2h71sqe8j5he2e37op0x1x4f262z4e1hizhj1hwiao0vzl2zx7n123sphyqvnm8fr1odr4nvsjzhr8oga5f6swsftw7uz4lp35yma0vn9wafucqiyr8y64gwes0c6mtmpujardss2gbs2rfxda == \z\5\y\q\n\v\c\y\e\x\g\b\p\3\v\t\k\0\9\w\0\1\g\x\y\d\x\a\o\7\0\h\3\6\q\j\j\l\p\u\y\u\m\x\z\m\r\f\q\q\e\k\x\y\h\6\1\i\4\d\e\n\5\0\l\6\0\5\q\y\t\1\3\y\i\u\o\2\x\6\h\p\2\s\s\r\s\m\l\a\6\b\3\9\v\g\3\7\l\z\7\y\p\a\t\r\7\r\b\l\1\n\n\m\j\b\7\c\k\8\i\v\m\c\0\i\8\4\s\y\a\a\9\l\j\f\j\l\1\9\v\c\e\n\f\v\h\l\l\l\h\t\1\8\l\i\f\d\g\7\g\g\8\m\v\z\u\i\0\h\r\6\o\x\f\h\g\z\o\c\1\0\n\s\k\w\k\5\n\x\7\x\d\t\0\d\6\t\j\o\x\h\s\g\g\q\4\3\5\a\m\b\s\x\v\1\v\x\k\j\y\t\g\5\k\h\8\x\t\k\b\1\4\z\6\1\l\u\y\g\f\l\9\o\b\8\y\z\j\r\m\q\g\7\1\4\0\h\h\r\v\s\p\u\h\k\6\q\u\5\m\o\4\x\x\f\z\k\b\w\w\w\t\9\0\1\s\u\4\3\q\4\y\d\t\5\c\7\o\c\d\f\o\7\y\q\w\r\n\o\2\n\w\0\r\l\h\y\j\a\t\q\3\7\s\d\d\z\6\b\w\r\n\i\5\6\e\u\r\z\n\4\y\h\6\v\q\7\r\y\d\0\k\b\b\s\w\t\u\0\4\g\r\k\r\d\2\h\7\1\s\q\e\8\j\5\h\e\2\e\3\7\o\p\0\x\1\x\4\f\2\6\2\z\4\e\1\h\i\z\h\j\1\h\w\i\a\o\0\v\z\l\2\z\x\7\n\1\2\3\s\p\h\y\q\v\n\m\8\f\r\1\o\d\r\4\n\v\s\j\z\h\r\8\o\g\a\5\f\6\s\w\s\f\t\w\7\u\z\4\l\p\3\5\y\m\a\0\v\n\9\w\a\f\u\c\q\i\y\r\8\y\6\4\g\w\e\s\0\c\6\m\t\m\p\u\j\a\r\d\s\s\2\g\b\s\2\r\f\x\d\a ]] 00:23:36.963 00:23:36.963 real 0m4.807s 00:23:36.963 user 0m3.861s 00:23:36.963 sys 0m0.610s 00:23:36.963 21:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.963 ************************************ 00:23:36.963 END TEST dd_flag_nofollow 00:23:36.963 ************************************ 00:23:36.963 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:23:36.963 21:06:04 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:23:36.963 21:06:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:36.963 21:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:36.963 21:06:04 -- common/autotest_common.sh@10 -- # set +x 00:23:36.963 ************************************ 00:23:36.963 START TEST dd_flag_noatime 00:23:36.963 ************************************ 00:23:36.963 21:06:05 -- common/autotest_common.sh@1104 -- # noatime 00:23:36.964 21:06:05 -- dd/posix.sh@53 -- # local atime_if 00:23:36.964 21:06:05 -- dd/posix.sh@54 -- # local atime_of 00:23:36.964 21:06:05 -- dd/posix.sh@58 -- # gen_bytes 512 00:23:36.964 21:06:05 -- dd/common.sh@98 -- # xtrace_disable 00:23:36.964 21:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:36.964 21:06:05 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:36.964 21:06:05 -- dd/posix.sh@60 -- # atime_if=1717967163 00:23:36.964 21:06:05 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:36.964 21:06:05 -- dd/posix.sh@61 -- # atime_of=1717967164 00:23:36.964 21:06:05 -- dd/posix.sh@66 -- # sleep 1 00:23:37.899 21:06:06 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:38.158 [2024-06-09 21:06:06.089141] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:38.158 [2024-06-09 21:06:06.089354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128351 ] 00:23:38.158 [2024-06-09 21:06:06.255853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.415 [2024-06-09 21:06:06.410899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.603  Copying: 512/512 [B] (average 500 kBps) 00:23:39.603 00:23:39.603 21:06:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:39.603 21:06:07 -- dd/posix.sh@69 -- # (( atime_if == 1717967163 )) 00:23:39.603 21:06:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:39.603 21:06:07 -- dd/posix.sh@70 -- # (( atime_of == 1717967164 )) 00:23:39.603 21:06:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:39.603 [2024-06-09 21:06:07.718651] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:39.603 [2024-06-09 21:06:07.718886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128371 ] 00:23:39.862 [2024-06-09 21:06:07.888474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.120 [2024-06-09 21:06:08.058600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.313  Copying: 512/512 [B] (average 500 kBps) 00:23:41.313 00:23:41.313 21:06:09 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:41.313 21:06:09 -- dd/posix.sh@73 -- # (( atime_if < 1717967168 )) 00:23:41.313 00:23:41.313 real 0m4.316s 00:23:41.313 user 0m2.577s 00:23:41.313 sys 0m0.463s 00:23:41.313 21:06:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.313 21:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:41.313 ************************************ 00:23:41.313 END TEST dd_flag_noatime 00:23:41.313 ************************************ 00:23:41.313 21:06:09 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:23:41.313 21:06:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:41.313 21:06:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.313 21:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:41.313 ************************************ 00:23:41.313 START TEST dd_flags_misc 00:23:41.313 ************************************ 00:23:41.313 21:06:09 -- common/autotest_common.sh@1104 -- # io 00:23:41.313 21:06:09 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:23:41.313 21:06:09 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
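dd_flag_noatime, which finished just above, records dd.dump0's access time with stat --printf=%X, copies the file with --iflag=noatime, and asserts the atime is unchanged ((( atime_if == 1717967163 ))); a plain copy afterwards must move it forward ((( atime_if < 1717967168 ))). The same probe with GNU dd (a sketch; note O_NOATIME generally requires owning the file or CAP_FOWNER, and mount options like relatime can mask atime updates):

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 iflag=noatime of=/dev/null   # O_NOATIME read must not touch atime
  atime_after=$(stat --printf=%X dd.dump0)
  (( atime_before == atime_after )) && echo 'atime preserved'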
00:23:41.313 21:06:09 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:23:41.313 21:06:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:41.313 21:06:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:41.313 21:06:09 -- dd/common.sh@98 -- # xtrace_disable 00:23:41.313 21:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:41.313 21:06:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:41.313 21:06:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:41.313 [2024-06-09 21:06:09.443455] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:41.313 [2024-06-09 21:06:09.443656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128414 ] 00:23:41.571 [2024-06-09 21:06:09.613389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.830 [2024-06-09 21:06:09.795727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.024  Copying: 512/512 [B] (average 250 kBps) 00:23:43.024 00:23:43.024 21:06:11 -- dd/posix.sh@93 -- # [[ c5z9u3vtpaemd6h7ukdjnwgqde5wf158vrefursegmzecz3v7cfuhuxoiwb9725ytb60ebgzv9tu59dmmzr64zdd7wxqiwj9e0yzvxurzk7efqge2ei3j4o [... remainder of the 512-byte payload and its backslash-escaped duplicate elided ...] \f\x\b\7\d\n ]] 00:23:43.024 21:06:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:43.024 21:06:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:43.024 [2024-06-09 21:06:11.115361] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:23:43.024 [2024-06-09 21:06:11.115533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128447 ] 00:23:43.283 [2024-06-09 21:06:11.264567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.283 [2024-06-09 21:06:11.443304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.916  Copying: 512/512 [B] (average 500 kBps) 00:23:44.916 00:23:44.916 21:06:12 -- dd/posix.sh@93 -- # [[ c5z9u3vtpaemd6h7ukdjnwgqde5wf158vrefursegmzecz3v7cfuhuxoiwb9725ytb60ebgzv9tu59dmmzr64zdd7wxqiwj9e0yzvxurzk7efqge2ei3j4o [... payload and escaped duplicate elided; identical check to the one above ...] \f\x\b\7\d\n ]] 00:23:44.916 21:06:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:44.916 21:06:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:44.916 [2024-06-09 21:06:12.752264] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
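The dd_flags_misc runs interleaved here pair every read flag with every write flag, as the dd/posix.sh@85 and @87 loop iterations in the trace show: flags_ro=(direct nonblock) crossed with flags_rw=(direct nonblock sync dsync), eight spdk_dd copies in all, each of which must deliver the 512-byte payload intact. The loop's shape, reconstructed from the trace (spdk_dd stands for the full build/bin path; cmp stands in for the test's in-shell string comparison):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" \
              --of=dd.dump1 --oflag="$flag_rw"
      cmp dd.dump0 dd.dump1   # payload must survive every flag combination
    done
  done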
00:23:44.916 [2024-06-09 21:06:12.752492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128464 ] 00:23:45.175 [2024-06-09 21:06:12.921200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.175 [2024-06-09 21:06:13.098128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.368  Copying: 512/512 [B] (average 250 kBps) 00:23:46.368 00:23:46.368 21:06:14 -- dd/posix.sh@93 -- # [[ c5z9u3vtpaemd6h7ukdjnwgqde5wf158 [... payload and escaped duplicate elided ...] \f\x\b\7\d\n ]] 00:23:46.368 21:06:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:46.368 21:06:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:46.368 [2024-06-09 21:06:14.446188] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:46.368 [2024-06-09 21:06:14.446406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128490 ] 00:23:46.627 [2024-06-09 21:06:14.613881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.627 [2024-06-09 21:06:14.774518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.821  Copying: 512/512 [B] (average 166 kBps) 00:23:47.821 00:23:48.081 21:06:16 -- dd/posix.sh@93 -- # [[ c5z9u3vtpaemd6h7ukdjnwgqde5wf158 [... payload and escaped duplicate elided ...] \f\x\b\7\d\n ]] 00:23:48.081 21:06:16 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:23:48.081 21:06:16 -- dd/posix.sh@86 -- # gen_bytes 512 00:23:48.081 21:06:16 -- dd/common.sh@98 -- # xtrace_disable 00:23:48.081 21:06:16 -- common/autotest_common.sh@10 -- # set +x 00:23:48.081 21:06:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:48.081 21:06:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:23:48.081 [2024-06-09 21:06:16.077491] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:48.081 [2024-06-09 21:06:16.078430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128512 ] 00:23:48.340 [2024-06-09 21:06:16.245501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.340 [2024-06-09 21:06:16.416466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.535  Copying: 512/512 [B] (average 500 kBps) 00:23:49.535 00:23:49.535 21:06:17 -- dd/posix.sh@93 -- # [[ bdg7tszsro196k6xyz7hztqzojjm7yk2yh53ke9rgzxb4oiz97ug679d3sd18qmp734to9bmx23j5sfsvgnyy03sngltq778nywh91w2h76 [... new 512-byte payload for the nonblock round and its escaped duplicate elided ...] \u\k\c\r\r ]] 00:23:49.535 21:06:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:49.535 21:06:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:23:49.795 [2024-06-09 21:06:17.719188] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:49.795 [2024-06-09 21:06:17.719411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128536 ] 00:23:50.053 [2024-06-09 21:06:17.887946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.053 [2024-06-09 21:06:18.063381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.261  Copying: 512/512 [B] (average 500 kBps) 00:23:51.261 00:23:51.261 21:06:19 -- dd/posix.sh@93 -- # [[ bdg7tszsro196k6xyz7 [... payload and escaped duplicate elided ...] \u\k\c\r\r ]] 00:23:51.261 21:06:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:51.261 21:06:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:23:51.261 [2024-06-09 21:06:19.377883] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:23:51.261 [2024-06-09 21:06:19.378080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128564 ] 00:23:51.535 [2024-06-09 21:06:19.545276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.795 [2024-06-09 21:06:19.716467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.173  Copying: 512/512 [B] (average 166 kBps) 00:23:53.173 00:23:53.173 21:06:20 -- dd/posix.sh@93 -- # [[ bdg7tszsro196k6xyz7hztqzojjm7yk2yh53ke9rgzxb4oiz97ug679d3sd18qmp734to9bmx23j5sfsvgnyy03sngltq778nywh91w2h76mg5i0ln0gtlecgvaqyo2jdek02xt0ujhd93tnrsdurbe46wd20te2k19d3r3zew5bfaecnax8fuz4le5vd6m79bua32dfl6vep4rjjah8wdor6lq6bo48vhqzduwkg4sqozqgw8msu37ezw1re7yl2nl98qkpdxrlf2ocyufcct617euy2187mwo6nrmwbez84svnkgak8mv2nc6ovck7i6wcr4geyype7taszwq6i15f1n4bw5korhk2o0gck43d6h9m4nkstzbrry2t35xrltbc057zmm16b9952cblm1du6k9itrj6oundjm250ix2t31249y983rp9cw3luvaf596qpk1rfpatdzqt6uep0mwdvz8iq21x9b8fsxj6tmdpr5luh82m5pa3u6ukcrr == \b\d\g\7\t\s\z\s\r\o\1\9\6\k\6\x\y\z\7\h\z\t\q\z\o\j\j\m\7\y\k\2\y\h\5\3\k\e\9\r\g\z\x\b\4\o\i\z\9\7\u\g\6\7\9\d\3\s\d\1\8\q\m\p\7\3\4\t\o\9\b\m\x\2\3\j\5\s\f\s\v\g\n\y\y\0\3\s\n\g\l\t\q\7\7\8\n\y\w\h\9\1\w\2\h\7\6\m\g\5\i\0\l\n\0\g\t\l\e\c\g\v\a\q\y\o\2\j\d\e\k\0\2\x\t\0\u\j\h\d\9\3\t\n\r\s\d\u\r\b\e\4\6\w\d\2\0\t\e\2\k\1\9\d\3\r\3\z\e\w\5\b\f\a\e\c\n\a\x\8\f\u\z\4\l\e\5\v\d\6\m\7\9\b\u\a\3\2\d\f\l\6\v\e\p\4\r\j\j\a\h\8\w\d\o\r\6\l\q\6\b\o\4\8\v\h\q\z\d\u\w\k\g\4\s\q\o\z\q\g\w\8\m\s\u\3\7\e\z\w\1\r\e\7\y\l\2\n\l\9\8\q\k\p\d\x\r\l\f\2\o\c\y\u\f\c\c\t\6\1\7\e\u\y\2\1\8\7\m\w\o\6\n\r\m\w\b\e\z\8\4\s\v\n\k\g\a\k\8\m\v\2\n\c\6\o\v\c\k\7\i\6\w\c\r\4\g\e\y\y\p\e\7\t\a\s\z\w\q\6\i\1\5\f\1\n\4\b\w\5\k\o\r\h\k\2\o\0\g\c\k\4\3\d\6\h\9\m\4\n\k\s\t\z\b\r\r\y\2\t\3\5\x\r\l\t\b\c\0\5\7\z\m\m\1\6\b\9\9\5\2\c\b\l\m\1\d\u\6\k\9\i\t\r\j\6\o\u\n\d\j\m\2\5\0\i\x\2\t\3\1\2\4\9\y\9\8\3\r\p\9\c\w\3\l\u\v\a\f\5\9\6\q\p\k\1\r\f\p\a\t\d\z\q\t\6\u\e\p\0\m\w\d\v\z\8\i\q\2\1\x\9\b\8\f\s\x\j\6\t\m\d\p\r\5\l\u\h\8\2\m\5\p\a\3\u\6\u\k\c\r\r ]] 00:23:53.173 21:06:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:23:53.173 21:06:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:23:53.173 [2024-06-09 21:06:21.016025] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:23:53.173 [2024-06-09 21:06:21.016214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128582 ] 00:23:53.173 [2024-06-09 21:06:21.179339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.432 [2024-06-09 21:06:21.359516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.626  Copying: 512/512 [B] (average 166 kBps) 00:23:54.626 00:23:54.626 21:06:22 -- dd/posix.sh@93 -- # [[ bdg7tszsro196k6xyz7hztqzojjm7yk2yh53ke9rgzxb4oiz97ug679d3sd18qmp734to9bmx23j5sfsvgnyy03sngltq778nywh91w2h76mg5i0ln0gtlecgvaqyo2jdek02xt0ujhd93tnrsdurbe46wd20te2k19d3r3zew5bfaecnax8fuz4le5vd6m79bua32dfl6vep4rjjah8wdor6lq6bo48vhqzduwkg4sqozqgw8msu37ezw1re7yl2nl98qkpdxrlf2ocyufcct617euy2187mwo6nrmwbez84svnkgak8mv2nc6ovck7i6wcr4geyype7taszwq6i15f1n4bw5korhk2o0gck43d6h9m4nkstzbrry2t35xrltbc057zmm16b9952cblm1du6k9itrj6oundjm250ix2t31249y983rp9cw3luvaf596qpk1rfpatdzqt6uep0mwdvz8iq21x9b8fsxj6tmdpr5luh82m5pa3u6ukcrr == \b\d\g\7\t\s\z\s\r\o\1\9\6\k\6\x\y\z\7\h\z\t\q\z\o\j\j\m\7\y\k\2\y\h\5\3\k\e\9\r\g\z\x\b\4\o\i\z\9\7\u\g\6\7\9\d\3\s\d\1\8\q\m\p\7\3\4\t\o\9\b\m\x\2\3\j\5\s\f\s\v\g\n\y\y\0\3\s\n\g\l\t\q\7\7\8\n\y\w\h\9\1\w\2\h\7\6\m\g\5\i\0\l\n\0\g\t\l\e\c\g\v\a\q\y\o\2\j\d\e\k\0\2\x\t\0\u\j\h\d\9\3\t\n\r\s\d\u\r\b\e\4\6\w\d\2\0\t\e\2\k\1\9\d\3\r\3\z\e\w\5\b\f\a\e\c\n\a\x\8\f\u\z\4\l\e\5\v\d\6\m\7\9\b\u\a\3\2\d\f\l\6\v\e\p\4\r\j\j\a\h\8\w\d\o\r\6\l\q\6\b\o\4\8\v\h\q\z\d\u\w\k\g\4\s\q\o\z\q\g\w\8\m\s\u\3\7\e\z\w\1\r\e\7\y\l\2\n\l\9\8\q\k\p\d\x\r\l\f\2\o\c\y\u\f\c\c\t\6\1\7\e\u\y\2\1\8\7\m\w\o\6\n\r\m\w\b\e\z\8\4\s\v\n\k\g\a\k\8\m\v\2\n\c\6\o\v\c\k\7\i\6\w\c\r\4\g\e\y\y\p\e\7\t\a\s\z\w\q\6\i\1\5\f\1\n\4\b\w\5\k\o\r\h\k\2\o\0\g\c\k\4\3\d\6\h\9\m\4\n\k\s\t\z\b\r\r\y\2\t\3\5\x\r\l\t\b\c\0\5\7\z\m\m\1\6\b\9\9\5\2\c\b\l\m\1\d\u\6\k\9\i\t\r\j\6\o\u\n\d\j\m\2\5\0\i\x\2\t\3\1\2\4\9\y\9\8\3\r\p\9\c\w\3\l\u\v\a\f\5\9\6\q\p\k\1\r\f\p\a\t\d\z\q\t\6\u\e\p\0\m\w\d\v\z\8\i\q\2\1\x\9\b\8\f\s\x\j\6\t\m\d\p\r\5\l\u\h\8\2\m\5\p\a\3\u\6\u\k\c\r\r ]] 00:23:54.626 00:23:54.626 real 0m13.239s 00:23:54.626 user 0m10.423s 00:23:54.626 sys 0m1.746s 00:23:54.626 21:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.626 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:54.626 ************************************ 00:23:54.626 END TEST dd_flags_misc 00:23:54.626 ************************************ 00:23:54.626 21:06:22 -- dd/posix.sh@131 -- # tests_forced_aio 00:23:54.626 21:06:22 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:23:54.626 * Second test run, using AIO 00:23:54.626 21:06:22 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:23:54.626 21:06:22 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:23:54.626 21:06:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:54.626 21:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.626 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:54.626 ************************************ 00:23:54.626 START TEST dd_flag_append_forced_aio 00:23:54.626 ************************************ 00:23:54.626 21:06:22 -- common/autotest_common.sh@1104 -- # append 00:23:54.626 21:06:22 -- dd/posix.sh@16 -- # local dump0 00:23:54.626 21:06:22 -- dd/posix.sh@17 -- # local dump1 00:23:54.626 21:06:22 -- dd/posix.sh@19 -- # gen_bytes 32 00:23:54.626 21:06:22 -- dd/common.sh@98 -- # xtrace_disable 
00:23:54.626 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:54.626 21:06:22 -- dd/posix.sh@19 -- # dump0=8oc29m6g7495yic22l97dq0tzk781uai 00:23:54.626 21:06:22 -- dd/posix.sh@20 -- # gen_bytes 32 00:23:54.626 21:06:22 -- dd/common.sh@98 -- # xtrace_disable 00:23:54.626 21:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:54.626 21:06:22 -- dd/posix.sh@20 -- # dump1=yn4756zfwv5rn2sag7lycw89yjxk2mms 00:23:54.626 21:06:22 -- dd/posix.sh@22 -- # printf %s 8oc29m6g7495yic22l97dq0tzk781uai 00:23:54.626 21:06:22 -- dd/posix.sh@23 -- # printf %s yn4756zfwv5rn2sag7lycw89yjxk2mms 00:23:54.626 21:06:22 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:23:54.626 [2024-06-09 21:06:22.737172] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:54.626 [2024-06-09 21:06:22.737384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128627 ] 00:23:54.884 [2024-06-09 21:06:22.904753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.143 [2024-06-09 21:06:23.097376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.338  Copying: 32/32 [B] (average 31 kBps) 00:23:56.338 00:23:56.338 21:06:24 -- dd/posix.sh@27 -- # [[ yn4756zfwv5rn2sag7lycw89yjxk2mms8oc29m6g7495yic22l97dq0tzk781uai == \y\n\4\7\5\6\z\f\w\v\5\r\n\2\s\a\g\7\l\y\c\w\8\9\y\j\x\k\2\m\m\s\8\o\c\2\9\m\6\g\7\4\9\5\y\i\c\2\2\l\9\7\d\q\0\t\z\k\7\8\1\u\a\i ]] 00:23:56.338 00:23:56.338 real 0m1.695s 00:23:56.338 user 0m1.344s 00:23:56.338 sys 0m0.220s 00:23:56.338 ************************************ 00:23:56.338 END TEST dd_flag_append_forced_aio 00:23:56.338 ************************************ 00:23:56.338 21:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.338 21:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.338 21:06:24 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:23:56.338 21:06:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:56.338 21:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:56.338 21:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:56.339 ************************************ 00:23:56.339 START TEST dd_flag_directory_forced_aio 00:23:56.339 ************************************ 00:23:56.339 21:06:24 -- common/autotest_common.sh@1104 -- # directory 00:23:56.339 21:06:24 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:56.339 21:06:24 -- common/autotest_common.sh@640 -- # local es=0 00:23:56.339 21:06:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:56.339 21:06:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.339 21:06:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:56.339 21:06:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.339 21:06:24 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:56.339 21:06:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.339 21:06:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:56.339 21:06:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.339 21:06:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:56.339 21:06:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:23:56.339 [2024-06-09 21:06:24.480764] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:56.339 [2024-06-09 21:06:24.480972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128682 ] 00:23:56.597 [2024-06-09 21:06:24.649006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.856 [2024-06-09 21:06:24.819551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.114 [2024-06-09 21:06:25.087626] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:57.114 [2024-06-09 21:06:25.087940] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:57.114 [2024-06-09 21:06:25.088005] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:57.682 [2024-06-09 21:06:25.728361] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:57.941 21:06:26 -- common/autotest_common.sh@643 -- # es=236 00:23:57.941 21:06:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:57.941 21:06:26 -- common/autotest_common.sh@652 -- # es=108 00:23:57.941 21:06:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:57.941 21:06:26 -- common/autotest_common.sh@660 -- # es=1 00:23:57.941 21:06:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:57.941 21:06:26 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:57.941 21:06:26 -- common/autotest_common.sh@640 -- # local es=0 00:23:57.941 21:06:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:57.941 21:06:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:57.941 21:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:57.941 21:06:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:57.941 21:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:57.941 21:06:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:57.941 21:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:57.941 21:06:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:23:57.941 21:06:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:57.941 21:06:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:23:58.200 [2024-06-09 21:06:26.164867] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:58.200 [2024-06-09 21:06:26.165072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128703 ] 00:23:58.200 [2024-06-09 21:06:26.332934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.459 [2024-06-09 21:06:26.516125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.718 [2024-06-09 21:06:26.805605] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:58.718 [2024-06-09 21:06:26.806050] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:23:58.718 [2024-06-09 21:06:26.806119] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:59.285 [2024-06-09 21:06:27.442067] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:23:59.853 21:06:27 -- common/autotest_common.sh@643 -- # es=236 00:23:59.853 ************************************ 00:23:59.853 END TEST dd_flag_directory_forced_aio 00:23:59.853 ************************************ 00:23:59.853 21:06:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:59.853 21:06:27 -- common/autotest_common.sh@652 -- # es=108 00:23:59.853 21:06:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:23:59.853 21:06:27 -- common/autotest_common.sh@660 -- # es=1 00:23:59.853 21:06:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:59.853 00:23:59.853 real 0m3.397s 00:23:59.853 user 0m2.681s 00:23:59.853 sys 0m0.491s 00:23:59.853 21:06:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.853 21:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:59.853 21:06:27 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:23:59.853 21:06:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:59.853 21:06:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:59.853 21:06:27 -- common/autotest_common.sh@10 -- # set +x 00:23:59.853 ************************************ 00:23:59.853 START TEST dd_flag_nofollow_forced_aio 00:23:59.853 ************************************ 00:23:59.853 21:06:27 -- common/autotest_common.sh@1104 -- # nofollow 00:23:59.853 21:06:27 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:59.853 21:06:27 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:59.853 21:06:27 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:23:59.853 21:06:27 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:23:59.853 21:06:27 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:59.853 21:06:27 -- common/autotest_common.sh@640 -- # local es=0 00:23:59.853 21:06:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:59.853 21:06:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.853 21:06:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:59.853 21:06:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.853 21:06:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:59.853 21:06:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.853 21:06:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:59.853 21:06:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.853 21:06:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:23:59.853 21:06:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:23:59.853 [2024-06-09 21:06:27.936555] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:59.853 [2024-06-09 21:06:27.936740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128754 ] 00:24:00.112 [2024-06-09 21:06:28.090589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.112 [2024-06-09 21:06:28.261649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.371 [2024-06-09 21:06:28.514008] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:00.371 [2024-06-09 21:06:28.514373] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:00.371 [2024-06-09 21:06:28.514438] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:00.951 [2024-06-09 21:06:29.104466] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:01.530 21:06:29 -- common/autotest_common.sh@643 -- # es=216 00:24:01.530 21:06:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:01.531 21:06:29 -- common/autotest_common.sh@652 -- # es=88 00:24:01.531 21:06:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:24:01.531 21:06:29 -- common/autotest_common.sh@660 -- # es=1 00:24:01.531 21:06:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:01.531 21:06:29 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:01.531 21:06:29 -- common/autotest_common.sh@640 -- # local es=0 00:24:01.531 21:06:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:01.531 21:06:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:01.531 21:06:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:01.531 21:06:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:01.531 21:06:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:01.531 21:06:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:01.531 21:06:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:01.531 21:06:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:01.531 21:06:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:01.531 21:06:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:01.531 [2024-06-09 21:06:29.534485] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:01.531 [2024-06-09 21:06:29.534713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128781 ] 00:24:01.531 [2024-06-09 21:06:29.701878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.788 [2024-06-09 21:06:29.880329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.047 [2024-06-09 21:06:30.139727] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:02.047 [2024-06-09 21:06:30.140111] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:02.047 [2024-06-09 21:06:30.140184] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:02.613 [2024-06-09 21:06:30.739168] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:03.179 21:06:31 -- common/autotest_common.sh@643 -- # es=216 00:24:03.179 21:06:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:03.179 21:06:31 -- common/autotest_common.sh@652 -- # es=88 00:24:03.179 21:06:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:24:03.179 21:06:31 -- common/autotest_common.sh@660 -- # es=1 00:24:03.179 21:06:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:03.179 21:06:31 -- dd/posix.sh@46 -- # gen_bytes 512 00:24:03.179 21:06:31 -- dd/common.sh@98 -- # xtrace_disable 00:24:03.179 21:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:03.179 21:06:31 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:03.179 [2024-06-09 21:06:31.152498] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:03.179 [2024-06-09 21:06:31.152733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128796 ] 00:24:03.179 [2024-06-09 21:06:31.311151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.437 [2024-06-09 21:06:31.487148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.628  Copying: 512/512 [B] (average 500 kBps) 00:24:04.628 00:24:04.628 21:06:32 -- dd/posix.sh@49 -- # [[ 0ckfm4neep62p21zrkhoqrlytcmqadnlxnb41x09sbrhn82xz1i28196thrzsxuszesn98bh7utlielpf37394dzrwdw8ywgcv05tgi2co7o4wv0h7ar678er6oqybzqi08l09uqp9875bejfachxugcm94qd8ypju1gq85qjj5c3obafqmk0rst6gruyk52bja7y3k496h9kpzuu21zvzgit3my7ej35pz29vgnsruyt4in2je1uf7ysptap1wzxaqepxxhii6phouuodsbxk91ls2ujconr535dsaya17eeohyv685qlsxth44a8nrqhfegliq13sygq0niqnoqv60s2p8een9fsiugerlvl3c5qd3q923awmi0jsgoglz7y0r6g3mcprdxknk7e7lphi0bfs7an7hp03fdr0dxi97q5dhw1e70kmbmvtfnxrjejt86hf2tk0ohdts04qjn5e5k4gtkonahydzgi8ckwulbmtq9m0niv40cp2rpzf9 == \0\c\k\f\m\4\n\e\e\p\6\2\p\2\1\z\r\k\h\o\q\r\l\y\t\c\m\q\a\d\n\l\x\n\b\4\1\x\0\9\s\b\r\h\n\8\2\x\z\1\i\2\8\1\9\6\t\h\r\z\s\x\u\s\z\e\s\n\9\8\b\h\7\u\t\l\i\e\l\p\f\3\7\3\9\4\d\z\r\w\d\w\8\y\w\g\c\v\0\5\t\g\i\2\c\o\7\o\4\w\v\0\h\7\a\r\6\7\8\e\r\6\o\q\y\b\z\q\i\0\8\l\0\9\u\q\p\9\8\7\5\b\e\j\f\a\c\h\x\u\g\c\m\9\4\q\d\8\y\p\j\u\1\g\q\8\5\q\j\j\5\c\3\o\b\a\f\q\m\k\0\r\s\t\6\g\r\u\y\k\5\2\b\j\a\7\y\3\k\4\9\6\h\9\k\p\z\u\u\2\1\z\v\z\g\i\t\3\m\y\7\e\j\3\5\p\z\2\9\v\g\n\s\r\u\y\t\4\i\n\2\j\e\1\u\f\7\y\s\p\t\a\p\1\w\z\x\a\q\e\p\x\x\h\i\i\6\p\h\o\u\u\o\d\s\b\x\k\9\1\l\s\2\u\j\c\o\n\r\5\3\5\d\s\a\y\a\1\7\e\e\o\h\y\v\6\8\5\q\l\s\x\t\h\4\4\a\8\n\r\q\h\f\e\g\l\i\q\1\3\s\y\g\q\0\n\i\q\n\o\q\v\6\0\s\2\p\8\e\e\n\9\f\s\i\u\g\e\r\l\v\l\3\c\5\q\d\3\q\9\2\3\a\w\m\i\0\j\s\g\o\g\l\z\7\y\0\r\6\g\3\m\c\p\r\d\x\k\n\k\7\e\7\l\p\h\i\0\b\f\s\7\a\n\7\h\p\0\3\f\d\r\0\d\x\i\9\7\q\5\d\h\w\1\e\7\0\k\m\b\m\v\t\f\n\x\r\j\e\j\t\8\6\h\f\2\t\k\0\o\h\d\t\s\0\4\q\j\n\5\e\5\k\4\g\t\k\o\n\a\h\y\d\z\g\i\8\c\k\w\u\l\b\m\t\q\9\m\0\n\i\v\4\0\c\p\2\r\p\z\f\9 ]] 00:24:04.628 00:24:04.628 real 0m4.852s 00:24:04.628 user 0m3.804s 00:24:04.628 sys 0m0.708s 00:24:04.628 21:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.628 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:04.628 ************************************ 00:24:04.628 END TEST dd_flag_nofollow_forced_aio 00:24:04.628 ************************************ 00:24:04.628 21:06:32 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:24:04.628 21:06:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:04.628 21:06:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.628 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:04.628 ************************************ 00:24:04.628 START TEST dd_flag_noatime_forced_aio 00:24:04.628 ************************************ 00:24:04.628 21:06:32 -- common/autotest_common.sh@1104 -- # noatime 00:24:04.628 21:06:32 -- dd/posix.sh@53 -- # local atime_if 00:24:04.628 21:06:32 -- dd/posix.sh@54 -- # local atime_of 00:24:04.628 21:06:32 -- dd/posix.sh@58 -- # gen_bytes 512 00:24:04.628 21:06:32 -- dd/common.sh@98 -- # xtrace_disable 00:24:04.628 21:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:04.628 21:06:32 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:04.628 21:06:32 -- dd/posix.sh@60 -- # atime_if=1717967191 
00:24:04.628 21:06:32 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:04.628 21:06:32 -- dd/posix.sh@61 -- # atime_of=1717967192 00:24:04.628 21:06:32 -- dd/posix.sh@66 -- # sleep 1 00:24:06.000 21:06:33 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:06.000 [2024-06-09 21:06:33.857968] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:06.000 [2024-06-09 21:06:33.858241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128860 ] 00:24:06.000 [2024-06-09 21:06:34.032577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.258 [2024-06-09 21:06:34.264104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.452  Copying: 512/512 [B] (average 500 kBps) 00:24:07.452 00:24:07.452 21:06:35 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:07.452 21:06:35 -- dd/posix.sh@69 -- # (( atime_if == 1717967191 )) 00:24:07.452 21:06:35 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:07.452 21:06:35 -- dd/posix.sh@70 -- # (( atime_of == 1717967192 )) 00:24:07.452 21:06:35 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:07.711 [2024-06-09 21:06:35.686714] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:07.712 [2024-06-09 21:06:35.686978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128893 ] 00:24:07.712 [2024-06-09 21:06:35.854578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.971 [2024-06-09 21:06:36.068849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.607  Copying: 512/512 [B] (average 500 kBps) 00:24:09.607 00:24:09.607 21:06:37 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:09.607 21:06:37 -- dd/posix.sh@73 -- # (( atime_if < 1717967196 )) 00:24:09.607 00:24:09.607 real 0m4.650s 00:24:09.607 user 0m2.821s 00:24:09.607 sys 0m0.569s 00:24:09.607 21:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.607 21:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:09.607 ************************************ 00:24:09.607 END TEST dd_flag_noatime_forced_aio 00:24:09.607 ************************************ 00:24:09.607 21:06:37 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:24:09.607 21:06:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:09.607 21:06:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.607 21:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:09.607 ************************************ 00:24:09.607 START TEST dd_flags_misc_forced_aio 00:24:09.607 ************************************ 00:24:09.607 21:06:37 -- common/autotest_common.sh@1104 -- # io 00:24:09.607 21:06:37 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:24:09.607 21:06:37 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:24:09.607 21:06:37 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:24:09.607 21:06:37 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:09.607 21:06:37 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:09.608 21:06:37 -- dd/common.sh@98 -- # xtrace_disable 00:24:09.608 21:06:37 -- common/autotest_common.sh@10 -- # set +x 00:24:09.608 21:06:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:09.608 21:06:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:09.608 [2024-06-09 21:06:37.546551] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:09.608 [2024-06-09 21:06:37.546684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128936 ] 00:24:09.608 [2024-06-09 21:06:37.699262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.867 [2024-06-09 21:06:37.883341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.112  Copying: 512/512 [B] (average 500 kBps) 00:24:11.112 00:24:11.112 21:06:39 -- dd/posix.sh@93 -- # [[ crl4pmis64vgl84b1xckd06y2r9j5abp9gi3ost4vdmdxo5lv1r9geh74hktp7tw1kwee8as2479ytb4vw92u4vp8feteyh7ul17dy7mqk663lu0wbq519cj9x8gpu048bkc1byur2srepnz9fd5j84ivq26v4j1oltfwb4cib210krf1abwjd43n12ooyrjsjy6anviy5ps0aouud4z01xjsn239edb9j65knl78a6nm9gxsiyb1ho05mai5u9grvd3kie2gsqjbea0u28pwzhoj36krlysckc8wgxhdkq1o3zua43zuub75o2ad33er2dup7bosvwsrxgr0awwroeia4sj3jtvo7hwa17t5au9qulsexdimdcf87squ6ntknprpotmc95naw33dtl39oybrtovuk62d0dn3bzgtwlhwciw82gw6ttc22jresbzdbwdh5g9xvxh9rdcd1q681o9j1lx1vz8seyixhsxbuu7mekqnkmcr2rg6zpzx7lu == \c\r\l\4\p\m\i\s\6\4\v\g\l\8\4\b\1\x\c\k\d\0\6\y\2\r\9\j\5\a\b\p\9\g\i\3\o\s\t\4\v\d\m\d\x\o\5\l\v\1\r\9\g\e\h\7\4\h\k\t\p\7\t\w\1\k\w\e\e\8\a\s\2\4\7\9\y\t\b\4\v\w\9\2\u\4\v\p\8\f\e\t\e\y\h\7\u\l\1\7\d\y\7\m\q\k\6\6\3\l\u\0\w\b\q\5\1\9\c\j\9\x\8\g\p\u\0\4\8\b\k\c\1\b\y\u\r\2\s\r\e\p\n\z\9\f\d\5\j\8\4\i\v\q\2\6\v\4\j\1\o\l\t\f\w\b\4\c\i\b\2\1\0\k\r\f\1\a\b\w\j\d\4\3\n\1\2\o\o\y\r\j\s\j\y\6\a\n\v\i\y\5\p\s\0\a\o\u\u\d\4\z\0\1\x\j\s\n\2\3\9\e\d\b\9\j\6\5\k\n\l\7\8\a\6\n\m\9\g\x\s\i\y\b\1\h\o\0\5\m\a\i\5\u\9\g\r\v\d\3\k\i\e\2\g\s\q\j\b\e\a\0\u\2\8\p\w\z\h\o\j\3\6\k\r\l\y\s\c\k\c\8\w\g\x\h\d\k\q\1\o\3\z\u\a\4\3\z\u\u\b\7\5\o\2\a\d\3\3\e\r\2\d\u\p\7\b\o\s\v\w\s\r\x\g\r\0\a\w\w\r\o\e\i\a\4\s\j\3\j\t\v\o\7\h\w\a\1\7\t\5\a\u\9\q\u\l\s\e\x\d\i\m\d\c\f\8\7\s\q\u\6\n\t\k\n\p\r\p\o\t\m\c\9\5\n\a\w\3\3\d\t\l\3\9\o\y\b\r\t\o\v\u\k\6\2\d\0\d\n\3\b\z\g\t\w\l\h\w\c\i\w\8\2\g\w\6\t\t\c\2\2\j\r\e\s\b\z\d\b\w\d\h\5\g\9\x\v\x\h\9\r\d\c\d\1\q\6\8\1\o\9\j\1\l\x\1\v\z\8\s\e\y\i\x\h\s\x\b\u\u\7\m\e\k\q\n\k\m\c\r\2\r\g\6\z\p\z\x\7\l\u ]] 00:24:11.112 21:06:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:11.112 21:06:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:11.370 [2024-06-09 21:06:39.293643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:11.370 [2024-06-09 21:06:39.293834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128962 ] 00:24:11.370 [2024-06-09 21:06:39.460980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.719 [2024-06-09 21:06:39.653727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.915  Copying: 512/512 [B] (average 500 kBps) 00:24:12.915 00:24:12.916 21:06:41 -- dd/posix.sh@93 -- # [[ crl4pmis64vgl84b1xckd06y2r9j5abp9gi3ost4vdmdxo5lv1r9geh74hktp7tw1kwee8as2479ytb4vw92u4vp8feteyh7ul17dy7mqk663lu0wbq519cj9x8gpu048bkc1byur2srepnz9fd5j84ivq26v4j1oltfwb4cib210krf1abwjd43n12ooyrjsjy6anviy5ps0aouud4z01xjsn239edb9j65knl78a6nm9gxsiyb1ho05mai5u9grvd3kie2gsqjbea0u28pwzhoj36krlysckc8wgxhdkq1o3zua43zuub75o2ad33er2dup7bosvwsrxgr0awwroeia4sj3jtvo7hwa17t5au9qulsexdimdcf87squ6ntknprpotmc95naw33dtl39oybrtovuk62d0dn3bzgtwlhwciw82gw6ttc22jresbzdbwdh5g9xvxh9rdcd1q681o9j1lx1vz8seyixhsxbuu7mekqnkmcr2rg6zpzx7lu == \c\r\l\4\p\m\i\s\6\4\v\g\l\8\4\b\1\x\c\k\d\0\6\y\2\r\9\j\5\a\b\p\9\g\i\3\o\s\t\4\v\d\m\d\x\o\5\l\v\1\r\9\g\e\h\7\4\h\k\t\p\7\t\w\1\k\w\e\e\8\a\s\2\4\7\9\y\t\b\4\v\w\9\2\u\4\v\p\8\f\e\t\e\y\h\7\u\l\1\7\d\y\7\m\q\k\6\6\3\l\u\0\w\b\q\5\1\9\c\j\9\x\8\g\p\u\0\4\8\b\k\c\1\b\y\u\r\2\s\r\e\p\n\z\9\f\d\5\j\8\4\i\v\q\2\6\v\4\j\1\o\l\t\f\w\b\4\c\i\b\2\1\0\k\r\f\1\a\b\w\j\d\4\3\n\1\2\o\o\y\r\j\s\j\y\6\a\n\v\i\y\5\p\s\0\a\o\u\u\d\4\z\0\1\x\j\s\n\2\3\9\e\d\b\9\j\6\5\k\n\l\7\8\a\6\n\m\9\g\x\s\i\y\b\1\h\o\0\5\m\a\i\5\u\9\g\r\v\d\3\k\i\e\2\g\s\q\j\b\e\a\0\u\2\8\p\w\z\h\o\j\3\6\k\r\l\y\s\c\k\c\8\w\g\x\h\d\k\q\1\o\3\z\u\a\4\3\z\u\u\b\7\5\o\2\a\d\3\3\e\r\2\d\u\p\7\b\o\s\v\w\s\r\x\g\r\0\a\w\w\r\o\e\i\a\4\s\j\3\j\t\v\o\7\h\w\a\1\7\t\5\a\u\9\q\u\l\s\e\x\d\i\m\d\c\f\8\7\s\q\u\6\n\t\k\n\p\r\p\o\t\m\c\9\5\n\a\w\3\3\d\t\l\3\9\o\y\b\r\t\o\v\u\k\6\2\d\0\d\n\3\b\z\g\t\w\l\h\w\c\i\w\8\2\g\w\6\t\t\c\2\2\j\r\e\s\b\z\d\b\w\d\h\5\g\9\x\v\x\h\9\r\d\c\d\1\q\6\8\1\o\9\j\1\l\x\1\v\z\8\s\e\y\i\x\h\s\x\b\u\u\7\m\e\k\q\n\k\m\c\r\2\r\g\6\z\p\z\x\7\l\u ]] 00:24:12.916 21:06:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:12.916 21:06:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:12.916 [2024-06-09 21:06:41.066474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:12.916 [2024-06-09 21:06:41.066634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128986 ] 00:24:13.175 [2024-06-09 21:06:41.227164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.434 [2024-06-09 21:06:41.497270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.631  Copying: 512/512 [B] (average 250 kBps) 00:24:14.631 00:24:14.631 21:06:42 -- dd/posix.sh@93 -- # [[ crl4pmis64vgl84b1xckd06y2r9j5abp9gi3ost4vdmdxo5lv1r9geh74hktp7tw1kwee8as2479ytb4vw92u4vp8feteyh7ul17dy7mqk663lu0wbq519cj9x8gpu048bkc1byur2srepnz9fd5j84ivq26v4j1oltfwb4cib210krf1abwjd43n12ooyrjsjy6anviy5ps0aouud4z01xjsn239edb9j65knl78a6nm9gxsiyb1ho05mai5u9grvd3kie2gsqjbea0u28pwzhoj36krlysckc8wgxhdkq1o3zua43zuub75o2ad33er2dup7bosvwsrxgr0awwroeia4sj3jtvo7hwa17t5au9qulsexdimdcf87squ6ntknprpotmc95naw33dtl39oybrtovuk62d0dn3bzgtwlhwciw82gw6ttc22jresbzdbwdh5g9xvxh9rdcd1q681o9j1lx1vz8seyixhsxbuu7mekqnkmcr2rg6zpzx7lu == \c\r\l\4\p\m\i\s\6\4\v\g\l\8\4\b\1\x\c\k\d\0\6\y\2\r\9\j\5\a\b\p\9\g\i\3\o\s\t\4\v\d\m\d\x\o\5\l\v\1\r\9\g\e\h\7\4\h\k\t\p\7\t\w\1\k\w\e\e\8\a\s\2\4\7\9\y\t\b\4\v\w\9\2\u\4\v\p\8\f\e\t\e\y\h\7\u\l\1\7\d\y\7\m\q\k\6\6\3\l\u\0\w\b\q\5\1\9\c\j\9\x\8\g\p\u\0\4\8\b\k\c\1\b\y\u\r\2\s\r\e\p\n\z\9\f\d\5\j\8\4\i\v\q\2\6\v\4\j\1\o\l\t\f\w\b\4\c\i\b\2\1\0\k\r\f\1\a\b\w\j\d\4\3\n\1\2\o\o\y\r\j\s\j\y\6\a\n\v\i\y\5\p\s\0\a\o\u\u\d\4\z\0\1\x\j\s\n\2\3\9\e\d\b\9\j\6\5\k\n\l\7\8\a\6\n\m\9\g\x\s\i\y\b\1\h\o\0\5\m\a\i\5\u\9\g\r\v\d\3\k\i\e\2\g\s\q\j\b\e\a\0\u\2\8\p\w\z\h\o\j\3\6\k\r\l\y\s\c\k\c\8\w\g\x\h\d\k\q\1\o\3\z\u\a\4\3\z\u\u\b\7\5\o\2\a\d\3\3\e\r\2\d\u\p\7\b\o\s\v\w\s\r\x\g\r\0\a\w\w\r\o\e\i\a\4\s\j\3\j\t\v\o\7\h\w\a\1\7\t\5\a\u\9\q\u\l\s\e\x\d\i\m\d\c\f\8\7\s\q\u\6\n\t\k\n\p\r\p\o\t\m\c\9\5\n\a\w\3\3\d\t\l\3\9\o\y\b\r\t\o\v\u\k\6\2\d\0\d\n\3\b\z\g\t\w\l\h\w\c\i\w\8\2\g\w\6\t\t\c\2\2\j\r\e\s\b\z\d\b\w\d\h\5\g\9\x\v\x\h\9\r\d\c\d\1\q\6\8\1\o\9\j\1\l\x\1\v\z\8\s\e\y\i\x\h\s\x\b\u\u\7\m\e\k\q\n\k\m\c\r\2\r\g\6\z\p\z\x\7\l\u ]] 00:24:14.631 21:06:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:14.631 21:06:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:14.631 [2024-06-09 21:06:42.781692] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:14.631 [2024-06-09 21:06:42.781893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129015 ] 00:24:14.890 [2024-06-09 21:06:42.932460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.149 [2024-06-09 21:06:43.103840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.351  Copying: 512/512 [B] (average 250 kBps) 00:24:16.351 00:24:16.352 21:06:44 -- dd/posix.sh@93 -- # [[ crl4pmis64vgl84b1xckd06y2r9j5abp9gi3ost4vdmdxo5lv1r9geh74hktp7tw1kwee8as2479ytb4vw92u4vp8feteyh7ul17dy7mqk663lu0wbq519cj9x8gpu048bkc1byur2srepnz9fd5j84ivq26v4j1oltfwb4cib210krf1abwjd43n12ooyrjsjy6anviy5ps0aouud4z01xjsn239edb9j65knl78a6nm9gxsiyb1ho05mai5u9grvd3kie2gsqjbea0u28pwzhoj36krlysckc8wgxhdkq1o3zua43zuub75o2ad33er2dup7bosvwsrxgr0awwroeia4sj3jtvo7hwa17t5au9qulsexdimdcf87squ6ntknprpotmc95naw33dtl39oybrtovuk62d0dn3bzgtwlhwciw82gw6ttc22jresbzdbwdh5g9xvxh9rdcd1q681o9j1lx1vz8seyixhsxbuu7mekqnkmcr2rg6zpzx7lu == \c\r\l\4\p\m\i\s\6\4\v\g\l\8\4\b\1\x\c\k\d\0\6\y\2\r\9\j\5\a\b\p\9\g\i\3\o\s\t\4\v\d\m\d\x\o\5\l\v\1\r\9\g\e\h\7\4\h\k\t\p\7\t\w\1\k\w\e\e\8\a\s\2\4\7\9\y\t\b\4\v\w\9\2\u\4\v\p\8\f\e\t\e\y\h\7\u\l\1\7\d\y\7\m\q\k\6\6\3\l\u\0\w\b\q\5\1\9\c\j\9\x\8\g\p\u\0\4\8\b\k\c\1\b\y\u\r\2\s\r\e\p\n\z\9\f\d\5\j\8\4\i\v\q\2\6\v\4\j\1\o\l\t\f\w\b\4\c\i\b\2\1\0\k\r\f\1\a\b\w\j\d\4\3\n\1\2\o\o\y\r\j\s\j\y\6\a\n\v\i\y\5\p\s\0\a\o\u\u\d\4\z\0\1\x\j\s\n\2\3\9\e\d\b\9\j\6\5\k\n\l\7\8\a\6\n\m\9\g\x\s\i\y\b\1\h\o\0\5\m\a\i\5\u\9\g\r\v\d\3\k\i\e\2\g\s\q\j\b\e\a\0\u\2\8\p\w\z\h\o\j\3\6\k\r\l\y\s\c\k\c\8\w\g\x\h\d\k\q\1\o\3\z\u\a\4\3\z\u\u\b\7\5\o\2\a\d\3\3\e\r\2\d\u\p\7\b\o\s\v\w\s\r\x\g\r\0\a\w\w\r\o\e\i\a\4\s\j\3\j\t\v\o\7\h\w\a\1\7\t\5\a\u\9\q\u\l\s\e\x\d\i\m\d\c\f\8\7\s\q\u\6\n\t\k\n\p\r\p\o\t\m\c\9\5\n\a\w\3\3\d\t\l\3\9\o\y\b\r\t\o\v\u\k\6\2\d\0\d\n\3\b\z\g\t\w\l\h\w\c\i\w\8\2\g\w\6\t\t\c\2\2\j\r\e\s\b\z\d\b\w\d\h\5\g\9\x\v\x\h\9\r\d\c\d\1\q\6\8\1\o\9\j\1\l\x\1\v\z\8\s\e\y\i\x\h\s\x\b\u\u\7\m\e\k\q\n\k\m\c\r\2\r\g\6\z\p\z\x\7\l\u ]] 00:24:16.352 21:06:44 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:16.352 21:06:44 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:16.352 21:06:44 -- dd/common.sh@98 -- # xtrace_disable 00:24:16.352 21:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:16.352 21:06:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:16.352 21:06:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:16.352 [2024-06-09 21:06:44.414705] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:16.352 [2024-06-09 21:06:44.414939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129034 ] 00:24:16.610 [2024-06-09 21:06:44.568543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.610 [2024-06-09 21:06:44.734229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.806  Copying: 512/512 [B] (average 500 kBps) 00:24:17.806 00:24:17.806 21:06:45 -- dd/posix.sh@93 -- # [[ fo77hubpmcyoc8d8n3gk7dvl7nijtozm3skloqbo3n81id6pnhgw64phfs4s4oh8hbbq7fugzhohi5fcwpp8ipva7spl3ls6zghtxjbf8noeuz0tlxenhyqnky04j1v1h5ze01folbbo60nhotrzs7gzisqx185pfayo5cvqfioxxemedhia4q15e0ktgkheoh1ft0r6afltf9ro1ktfi6t7zhqodhiycq8t3fd0rn6yujhtg631hiisrgldvvxodtyh4sarmysxl9jh7gbd53dm0wq3iygpnk2488kkckagcktsqriyy0sy9i71ex22vr7ghl3kwd9jlb7z99v1rbbckhux28mik9wog3h1y0lrlfmvtxav2gbkixc3ilggk27oms01gh74rbiyynsvoaro291gob4xfelrem5nkdwa2uoglznu3wnb6sxf8vrj30zu209zaxalix0mqsbvy3kw0jekan4f0uv8m20f501doryibdst2cyco8rzoq7w == \f\o\7\7\h\u\b\p\m\c\y\o\c\8\d\8\n\3\g\k\7\d\v\l\7\n\i\j\t\o\z\m\3\s\k\l\o\q\b\o\3\n\8\1\i\d\6\p\n\h\g\w\6\4\p\h\f\s\4\s\4\o\h\8\h\b\b\q\7\f\u\g\z\h\o\h\i\5\f\c\w\p\p\8\i\p\v\a\7\s\p\l\3\l\s\6\z\g\h\t\x\j\b\f\8\n\o\e\u\z\0\t\l\x\e\n\h\y\q\n\k\y\0\4\j\1\v\1\h\5\z\e\0\1\f\o\l\b\b\o\6\0\n\h\o\t\r\z\s\7\g\z\i\s\q\x\1\8\5\p\f\a\y\o\5\c\v\q\f\i\o\x\x\e\m\e\d\h\i\a\4\q\1\5\e\0\k\t\g\k\h\e\o\h\1\f\t\0\r\6\a\f\l\t\f\9\r\o\1\k\t\f\i\6\t\7\z\h\q\o\d\h\i\y\c\q\8\t\3\f\d\0\r\n\6\y\u\j\h\t\g\6\3\1\h\i\i\s\r\g\l\d\v\v\x\o\d\t\y\h\4\s\a\r\m\y\s\x\l\9\j\h\7\g\b\d\5\3\d\m\0\w\q\3\i\y\g\p\n\k\2\4\8\8\k\k\c\k\a\g\c\k\t\s\q\r\i\y\y\0\s\y\9\i\7\1\e\x\2\2\v\r\7\g\h\l\3\k\w\d\9\j\l\b\7\z\9\9\v\1\r\b\b\c\k\h\u\x\2\8\m\i\k\9\w\o\g\3\h\1\y\0\l\r\l\f\m\v\t\x\a\v\2\g\b\k\i\x\c\3\i\l\g\g\k\2\7\o\m\s\0\1\g\h\7\4\r\b\i\y\y\n\s\v\o\a\r\o\2\9\1\g\o\b\4\x\f\e\l\r\e\m\5\n\k\d\w\a\2\u\o\g\l\z\n\u\3\w\n\b\6\s\x\f\8\v\r\j\3\0\z\u\2\0\9\z\a\x\a\l\i\x\0\m\q\s\b\v\y\3\k\w\0\j\e\k\a\n\4\f\0\u\v\8\m\2\0\f\5\0\1\d\o\r\y\i\b\d\s\t\2\c\y\c\o\8\r\z\o\q\7\w ]] 00:24:17.806 21:06:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:17.807 21:06:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:18.065 [2024-06-09 21:06:46.016601] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:18.065 [2024-06-09 21:06:46.016818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129058 ] 00:24:18.065 [2024-06-09 21:06:46.182299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.324 [2024-06-09 21:06:46.344431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.520  Copying: 512/512 [B] (average 500 kBps) 00:24:19.520 00:24:19.520 21:06:47 -- dd/posix.sh@93 -- # [[ fo77hubpmcyoc8d8n3gk7dvl7nijtozm3skloqbo3n81id6pnhgw64phfs4s4oh8hbbq7fugzhohi5fcwpp8ipva7spl3ls6zghtxjbf8noeuz0tlxenhyqnky04j1v1h5ze01folbbo60nhotrzs7gzisqx185pfayo5cvqfioxxemedhia4q15e0ktgkheoh1ft0r6afltf9ro1ktfi6t7zhqodhiycq8t3fd0rn6yujhtg631hiisrgldvvxodtyh4sarmysxl9jh7gbd53dm0wq3iygpnk2488kkckagcktsqriyy0sy9i71ex22vr7ghl3kwd9jlb7z99v1rbbckhux28mik9wog3h1y0lrlfmvtxav2gbkixc3ilggk27oms01gh74rbiyynsvoaro291gob4xfelrem5nkdwa2uoglznu3wnb6sxf8vrj30zu209zaxalix0mqsbvy3kw0jekan4f0uv8m20f501doryibdst2cyco8rzoq7w == \f\o\7\7\h\u\b\p\m\c\y\o\c\8\d\8\n\3\g\k\7\d\v\l\7\n\i\j\t\o\z\m\3\s\k\l\o\q\b\o\3\n\8\1\i\d\6\p\n\h\g\w\6\4\p\h\f\s\4\s\4\o\h\8\h\b\b\q\7\f\u\g\z\h\o\h\i\5\f\c\w\p\p\8\i\p\v\a\7\s\p\l\3\l\s\6\z\g\h\t\x\j\b\f\8\n\o\e\u\z\0\t\l\x\e\n\h\y\q\n\k\y\0\4\j\1\v\1\h\5\z\e\0\1\f\o\l\b\b\o\6\0\n\h\o\t\r\z\s\7\g\z\i\s\q\x\1\8\5\p\f\a\y\o\5\c\v\q\f\i\o\x\x\e\m\e\d\h\i\a\4\q\1\5\e\0\k\t\g\k\h\e\o\h\1\f\t\0\r\6\a\f\l\t\f\9\r\o\1\k\t\f\i\6\t\7\z\h\q\o\d\h\i\y\c\q\8\t\3\f\d\0\r\n\6\y\u\j\h\t\g\6\3\1\h\i\i\s\r\g\l\d\v\v\x\o\d\t\y\h\4\s\a\r\m\y\s\x\l\9\j\h\7\g\b\d\5\3\d\m\0\w\q\3\i\y\g\p\n\k\2\4\8\8\k\k\c\k\a\g\c\k\t\s\q\r\i\y\y\0\s\y\9\i\7\1\e\x\2\2\v\r\7\g\h\l\3\k\w\d\9\j\l\b\7\z\9\9\v\1\r\b\b\c\k\h\u\x\2\8\m\i\k\9\w\o\g\3\h\1\y\0\l\r\l\f\m\v\t\x\a\v\2\g\b\k\i\x\c\3\i\l\g\g\k\2\7\o\m\s\0\1\g\h\7\4\r\b\i\y\y\n\s\v\o\a\r\o\2\9\1\g\o\b\4\x\f\e\l\r\e\m\5\n\k\d\w\a\2\u\o\g\l\z\n\u\3\w\n\b\6\s\x\f\8\v\r\j\3\0\z\u\2\0\9\z\a\x\a\l\i\x\0\m\q\s\b\v\y\3\k\w\0\j\e\k\a\n\4\f\0\u\v\8\m\2\0\f\5\0\1\d\o\r\y\i\b\d\s\t\2\c\y\c\o\8\r\z\o\q\7\w ]] 00:24:19.520 21:06:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:19.520 21:06:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:19.520 [2024-06-09 21:06:47.651717] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:19.520 [2024-06-09 21:06:47.651909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129080 ] 00:24:19.779 [2024-06-09 21:06:47.819783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.038 [2024-06-09 21:06:47.991301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.235  Copying: 512/512 [B] (average 125 kBps) 00:24:21.235 00:24:21.235 21:06:49 -- dd/posix.sh@93 -- # [[ fo77hubpmcyoc8d8n3gk7dvl7nijtozm3skloqbo3n81id6pnhgw64phfs4s4oh8hbbq7fugzhohi5fcwpp8ipva7spl3ls6zghtxjbf8noeuz0tlxenhyqnky04j1v1h5ze01folbbo60nhotrzs7gzisqx185pfayo5cvqfioxxemedhia4q15e0ktgkheoh1ft0r6afltf9ro1ktfi6t7zhqodhiycq8t3fd0rn6yujhtg631hiisrgldvvxodtyh4sarmysxl9jh7gbd53dm0wq3iygpnk2488kkckagcktsqriyy0sy9i71ex22vr7ghl3kwd9jlb7z99v1rbbckhux28mik9wog3h1y0lrlfmvtxav2gbkixc3ilggk27oms01gh74rbiyynsvoaro291gob4xfelrem5nkdwa2uoglznu3wnb6sxf8vrj30zu209zaxalix0mqsbvy3kw0jekan4f0uv8m20f501doryibdst2cyco8rzoq7w == \f\o\7\7\h\u\b\p\m\c\y\o\c\8\d\8\n\3\g\k\7\d\v\l\7\n\i\j\t\o\z\m\3\s\k\l\o\q\b\o\3\n\8\1\i\d\6\p\n\h\g\w\6\4\p\h\f\s\4\s\4\o\h\8\h\b\b\q\7\f\u\g\z\h\o\h\i\5\f\c\w\p\p\8\i\p\v\a\7\s\p\l\3\l\s\6\z\g\h\t\x\j\b\f\8\n\o\e\u\z\0\t\l\x\e\n\h\y\q\n\k\y\0\4\j\1\v\1\h\5\z\e\0\1\f\o\l\b\b\o\6\0\n\h\o\t\r\z\s\7\g\z\i\s\q\x\1\8\5\p\f\a\y\o\5\c\v\q\f\i\o\x\x\e\m\e\d\h\i\a\4\q\1\5\e\0\k\t\g\k\h\e\o\h\1\f\t\0\r\6\a\f\l\t\f\9\r\o\1\k\t\f\i\6\t\7\z\h\q\o\d\h\i\y\c\q\8\t\3\f\d\0\r\n\6\y\u\j\h\t\g\6\3\1\h\i\i\s\r\g\l\d\v\v\x\o\d\t\y\h\4\s\a\r\m\y\s\x\l\9\j\h\7\g\b\d\5\3\d\m\0\w\q\3\i\y\g\p\n\k\2\4\8\8\k\k\c\k\a\g\c\k\t\s\q\r\i\y\y\0\s\y\9\i\7\1\e\x\2\2\v\r\7\g\h\l\3\k\w\d\9\j\l\b\7\z\9\9\v\1\r\b\b\c\k\h\u\x\2\8\m\i\k\9\w\o\g\3\h\1\y\0\l\r\l\f\m\v\t\x\a\v\2\g\b\k\i\x\c\3\i\l\g\g\k\2\7\o\m\s\0\1\g\h\7\4\r\b\i\y\y\n\s\v\o\a\r\o\2\9\1\g\o\b\4\x\f\e\l\r\e\m\5\n\k\d\w\a\2\u\o\g\l\z\n\u\3\w\n\b\6\s\x\f\8\v\r\j\3\0\z\u\2\0\9\z\a\x\a\l\i\x\0\m\q\s\b\v\y\3\k\w\0\j\e\k\a\n\4\f\0\u\v\8\m\2\0\f\5\0\1\d\o\r\y\i\b\d\s\t\2\c\y\c\o\8\r\z\o\q\7\w ]] 00:24:21.235 21:06:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:21.235 21:06:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:21.235 [2024-06-09 21:06:49.263861] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:21.235 [2024-06-09 21:06:49.264039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129104 ] 00:24:21.494 [2024-06-09 21:06:49.416631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.494 [2024-06-09 21:06:49.590050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.689  Copying: 512/512 [B] (average 250 kBps) 00:24:22.689 00:24:22.689 ************************************ 00:24:22.689 END TEST dd_flags_misc_forced_aio 00:24:22.689 ************************************ 00:24:22.689 21:06:50 -- dd/posix.sh@93 -- # [[ fo77hubpmcyoc8d8n3gk7dvl7nijtozm3skloqbo3n81id6pnhgw64phfs4s4oh8hbbq7fugzhohi5fcwpp8ipva7spl3ls6zghtxjbf8noeuz0tlxenhyqnky04j1v1h5ze01folbbo60nhotrzs7gzisqx185pfayo5cvqfioxxemedhia4q15e0ktgkheoh1ft0r6afltf9ro1ktfi6t7zhqodhiycq8t3fd0rn6yujhtg631hiisrgldvvxodtyh4sarmysxl9jh7gbd53dm0wq3iygpnk2488kkckagcktsqriyy0sy9i71ex22vr7ghl3kwd9jlb7z99v1rbbckhux28mik9wog3h1y0lrlfmvtxav2gbkixc3ilggk27oms01gh74rbiyynsvoaro291gob4xfelrem5nkdwa2uoglznu3wnb6sxf8vrj30zu209zaxalix0mqsbvy3kw0jekan4f0uv8m20f501doryibdst2cyco8rzoq7w == \f\o\7\7\h\u\b\p\m\c\y\o\c\8\d\8\n\3\g\k\7\d\v\l\7\n\i\j\t\o\z\m\3\s\k\l\o\q\b\o\3\n\8\1\i\d\6\p\n\h\g\w\6\4\p\h\f\s\4\s\4\o\h\8\h\b\b\q\7\f\u\g\z\h\o\h\i\5\f\c\w\p\p\8\i\p\v\a\7\s\p\l\3\l\s\6\z\g\h\t\x\j\b\f\8\n\o\e\u\z\0\t\l\x\e\n\h\y\q\n\k\y\0\4\j\1\v\1\h\5\z\e\0\1\f\o\l\b\b\o\6\0\n\h\o\t\r\z\s\7\g\z\i\s\q\x\1\8\5\p\f\a\y\o\5\c\v\q\f\i\o\x\x\e\m\e\d\h\i\a\4\q\1\5\e\0\k\t\g\k\h\e\o\h\1\f\t\0\r\6\a\f\l\t\f\9\r\o\1\k\t\f\i\6\t\7\z\h\q\o\d\h\i\y\c\q\8\t\3\f\d\0\r\n\6\y\u\j\h\t\g\6\3\1\h\i\i\s\r\g\l\d\v\v\x\o\d\t\y\h\4\s\a\r\m\y\s\x\l\9\j\h\7\g\b\d\5\3\d\m\0\w\q\3\i\y\g\p\n\k\2\4\8\8\k\k\c\k\a\g\c\k\t\s\q\r\i\y\y\0\s\y\9\i\7\1\e\x\2\2\v\r\7\g\h\l\3\k\w\d\9\j\l\b\7\z\9\9\v\1\r\b\b\c\k\h\u\x\2\8\m\i\k\9\w\o\g\3\h\1\y\0\l\r\l\f\m\v\t\x\a\v\2\g\b\k\i\x\c\3\i\l\g\g\k\2\7\o\m\s\0\1\g\h\7\4\r\b\i\y\y\n\s\v\o\a\r\o\2\9\1\g\o\b\4\x\f\e\l\r\e\m\5\n\k\d\w\a\2\u\o\g\l\z\n\u\3\w\n\b\6\s\x\f\8\v\r\j\3\0\z\u\2\0\9\z\a\x\a\l\i\x\0\m\q\s\b\v\y\3\k\w\0\j\e\k\a\n\4\f\0\u\v\8\m\2\0\f\5\0\1\d\o\r\y\i\b\d\s\t\2\c\y\c\o\8\r\z\o\q\7\w ]] 00:24:22.689 00:24:22.689 real 0m13.355s 00:24:22.689 user 0m10.407s 00:24:22.689 sys 0m1.857s 00:24:22.689 21:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.689 21:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:22.948 21:06:50 -- dd/posix.sh@1 -- # cleanup 00:24:22.948 21:06:50 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:22.948 21:06:50 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:22.948 00:24:22.948 real 0m55.770s 00:24:22.948 user 0m42.117s 00:24:22.948 sys 0m7.535s 00:24:22.948 21:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:22.948 21:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:22.948 ************************************ 00:24:22.948 END TEST spdk_dd_posix 00:24:22.948 ************************************ 00:24:22.948 21:06:50 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:24:22.948 21:06:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:22.948 21:06:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:22.948 21:06:50 -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.948 ************************************ 00:24:22.948 START TEST spdk_dd_malloc 00:24:22.948 ************************************ 00:24:22.948 21:06:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:24:22.948 * Looking for test storage... 00:24:22.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:22.948 21:06:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:22.948 21:06:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.948 21:06:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.948 21:06:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.948 21:06:50 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:22.948 21:06:50 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:22.948 21:06:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:22.948 21:06:50 -- paths/export.sh@5 -- # export PATH 00:24:22.948 21:06:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:22.948 21:06:50 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:24:22.948 21:06:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:22.948 21:06:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:22.948 21:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:22.948 ************************************ 00:24:22.948 START TEST dd_malloc_copy 00:24:22.948 ************************************ 00:24:22.948 21:06:51 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:24:22.948 21:06:51 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:24:22.948 21:06:51 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:24:22.948 21:06:51 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:24:22.948 21:06:51 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:24:22.948 21:06:51 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:24:22.948 21:06:51 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:24:22.948 21:06:51 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:24:22.948 21:06:51 -- dd/malloc.sh@28 -- # gen_conf 00:24:22.948 21:06:51 -- dd/common.sh@31 -- # xtrace_disable 00:24:22.948 21:06:51 -- common/autotest_common.sh@10 -- # set +x 00:24:22.948 { 00:24:22.948 "subsystems": [ 00:24:22.948 { 00:24:22.948 "subsystem": "bdev", 00:24:22.948 "config": [ 00:24:22.948 { 00:24:22.948 "params": { 00:24:22.948 "block_size": 512, 00:24:22.948 "num_blocks": 1048576, 00:24:22.948 "name": "malloc0" 00:24:22.948 }, 00:24:22.948 "method": "bdev_malloc_create" 00:24:22.948 }, 00:24:22.948 { 00:24:22.948 "params": { 00:24:22.948 "block_size": 512, 00:24:22.948 "num_blocks": 1048576, 00:24:22.948 "name": "malloc1" 00:24:22.948 }, 00:24:22.948 "method": "bdev_malloc_create" 00:24:22.948 }, 00:24:22.948 { 00:24:22.948 "method": "bdev_wait_for_examine" 00:24:22.948 } 00:24:22.948 ] 00:24:22.948 } 00:24:22.948 ] 00:24:22.948 } 00:24:22.948 [2024-06-09 21:06:51.071422] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
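The gen_conf / --json /dev/fd/62 pairing above is how every spdk_dd invocation in this run receives its bdev configuration: gen_conf (defined in dd/common.sh, per the trace) prints the JSON shown in the braces, and the shell hands it to spdk_dd on a file descriptor instead of a temp file. A minimal reconstruction under those assumptions:

    # two malloc bdevs of 1048576 blocks x 512 B = 512 MiB each
    gen_conf() {
        echo '{"subsystems":[{"subsystem":"bdev","config":[
          {"method":"bdev_malloc_create",
           "params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
          {"method":"bdev_malloc_create",
           "params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
          {"method":"bdev_wait_for_examine"}]}]}'
    }
    # process substitution is what surfaces as a --json /dev/fd/NN path in the trace
    spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)

The 512 MiB bdev size is why the progress lines that follow read "Copying: .../512 [MB]".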
00:24:22.948 [2024-06-09 21:06:51.071611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129200 ] 00:24:23.207 [2024-06-09 21:06:51.240317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.465 [2024-06-09 21:06:51.399375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.107  Copying: 215/512 [MB] (215 MBps) Copying: 428/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 214 MBps) 00:24:30.107 00:24:30.107 21:06:58 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:24:30.107 21:06:58 -- dd/malloc.sh@33 -- # gen_conf 00:24:30.107 21:06:58 -- dd/common.sh@31 -- # xtrace_disable 00:24:30.107 21:06:58 -- common/autotest_common.sh@10 -- # set +x 00:24:30.107 { 00:24:30.107 "subsystems": [ 00:24:30.107 { 00:24:30.107 "subsystem": "bdev", 00:24:30.107 "config": [ 00:24:30.107 { 00:24:30.107 "params": { 00:24:30.107 "block_size": 512, 00:24:30.107 "num_blocks": 1048576, 00:24:30.107 "name": "malloc0" 00:24:30.107 }, 00:24:30.107 "method": "bdev_malloc_create" 00:24:30.107 }, 00:24:30.107 { 00:24:30.107 "params": { 00:24:30.107 "block_size": 512, 00:24:30.107 "num_blocks": 1048576, 00:24:30.107 "name": "malloc1" 00:24:30.107 }, 00:24:30.107 "method": "bdev_malloc_create" 00:24:30.107 }, 00:24:30.107 { 00:24:30.107 "method": "bdev_wait_for_examine" 00:24:30.107 } 00:24:30.107 ] 00:24:30.107 } 00:24:30.107 ] 00:24:30.107 } 00:24:30.107 [2024-06-09 21:06:58.073935] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:30.107 [2024-06-09 21:06:58.074132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129290 ] 00:24:30.107 [2024-06-09 21:06:58.238117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.374 [2024-06-09 21:06:58.420183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.331  Copying: 217/512 [MB] (217 MBps) Copying: 434/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:24:37.331 00:24:37.331 00:24:37.331 real 0m13.996s 00:24:37.331 user 0m12.779s 00:24:37.331 sys 0m1.086s 00:24:37.331 21:07:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.331 21:07:05 -- common/autotest_common.sh@10 -- # set +x 00:24:37.331 ************************************ 00:24:37.331 END TEST dd_malloc_copy 00:24:37.331 ************************************ 00:24:37.331 00:24:37.331 real 0m14.122s 00:24:37.331 user 0m12.844s 00:24:37.331 sys 0m1.153s 00:24:37.331 21:07:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.331 21:07:05 -- common/autotest_common.sh@10 -- # set +x 00:24:37.331 ************************************ 00:24:37.331 END TEST spdk_dd_malloc 00:24:37.331 ************************************ 00:24:37.331 21:07:05 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:37.331 21:07:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:37.331 21:07:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:37.331 21:07:05 -- common/autotest_common.sh@10 -- # set +x 00:24:37.331 ************************************ 
00:24:37.331 START TEST spdk_dd_bdev_to_bdev 00:24:37.332 ************************************ 00:24:37.332 21:07:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:37.332 * Looking for test storage... 00:24:37.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:37.332 21:07:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:37.332 21:07:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.332 21:07:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.332 21:07:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.332 21:07:05 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:37.332 21:07:05 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:37.332 21:07:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:37.332 21:07:05 -- paths/export.sh@5 -- # export PATH 00:24:37.332 21:07:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:24:37.332 21:07:05 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:24:37.332 [2024-06-09 21:07:05.232217] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:37.332 [2024-06-09 21:07:05.232426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129438 ] 00:24:37.332 [2024-06-09 21:07:05.399392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.590 [2024-06-09 21:07:05.583332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.089  Copying: 256/256 [MB] (average 1368 MBps) 00:24:39.089 00:24:39.089 21:07:07 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:39.089 21:07:07 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:39.089 21:07:07 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:24:39.089 21:07:07 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:24:39.089 21:07:07 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:39.089 21:07:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:24:39.089 21:07:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:39.089 21:07:07 -- common/autotest_common.sh@10 -- # set +x 00:24:39.089 ************************************ 00:24:39.089 START TEST dd_inflate_file 00:24:39.089 ************************************ 00:24:39.089 21:07:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:39.089 [2024-06-09 21:07:07.076695] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:39.089 [2024-06-09 21:07:07.076890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129473 ] 00:24:39.089 [2024-06-09 21:07:07.244751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.347 [2024-06-09 21:07:07.418165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.574  Copying: 64/64 [MB] (average 1280 MBps) 00:24:40.574 00:24:40.574 00:24:40.574 real 0m1.705s 00:24:40.574 user 0m1.328s 00:24:40.574 sys 0m0.248s 00:24:40.574 21:07:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.574 ************************************ 00:24:40.574 END TEST dd_inflate_file 00:24:40.574 ************************************ 00:24:40.574 21:07:08 -- common/autotest_common.sh@10 -- # set +x 00:24:40.833 21:07:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:24:40.833 21:07:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:24:40.833 21:07:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:40.833 21:07:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:40.833 21:07:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:24:40.833 21:07:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:40.833 21:07:08 -- common/autotest_common.sh@10 -- # set +x 00:24:40.833 21:07:08 -- dd/common.sh@31 -- # xtrace_disable 00:24:40.833 21:07:08 -- common/autotest_common.sh@10 -- # set +x 00:24:40.833 ************************************ 00:24:40.833 START TEST dd_copy_to_out_bdev 00:24:40.833 ************************************ 00:24:40.833 21:07:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:40.833 { 00:24:40.833 "subsystems": [ 00:24:40.833 { 00:24:40.833 "subsystem": "bdev", 00:24:40.833 "config": [ 00:24:40.833 { 00:24:40.833 "params": { 00:24:40.833 "block_size": 4096, 00:24:40.833 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:40.833 "name": "aio1" 00:24:40.833 }, 00:24:40.833 "method": "bdev_aio_create" 00:24:40.833 }, 00:24:40.833 { 00:24:40.833 "params": { 00:24:40.833 "trtype": "pcie", 00:24:40.833 "traddr": "0000:00:06.0", 00:24:40.833 "name": "Nvme0" 00:24:40.833 }, 00:24:40.833 "method": "bdev_nvme_attach_controller" 00:24:40.833 }, 00:24:40.833 { 00:24:40.833 "method": "bdev_wait_for_examine" 00:24:40.833 } 00:24:40.833 ] 00:24:40.833 } 00:24:40.833 ] 00:24:40.833 } 00:24:40.833 [2024-06-09 21:07:08.841750] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
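The test_file0_size figure above is exact arithmetic rather than a magic constant: dd.dump0 begins with the 26-character magic line plus its newline, and dd_inflate_file then appended 64 one-MiB units of zeroes (--oflag=append --bs=1048576 --count=64), so

    27 + 64 * 1048576 = 67108891 bytes   # 'This Is Our Magic, find it' + newline + 64 MiB

which is exactly what wc -c reports.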
00:24:40.833 [2024-06-09 21:07:08.841982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129520 ] 00:24:40.833 [2024-06-09 21:07:09.009353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.091 [2024-06-09 21:07:09.178887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.987  Copying: 45/64 [MB] (45 MBps) Copying: 64/64 [MB] (average 45 MBps) 00:24:43.987 00:24:43.987 00:24:43.987 real 0m3.244s 00:24:43.987 user 0m2.893s 00:24:43.987 sys 0m0.251s 00:24:43.987 21:07:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.987 ************************************ 00:24:43.987 END TEST dd_copy_to_out_bdev 00:24:43.987 21:07:12 -- common/autotest_common.sh@10 -- # set +x 00:24:43.987 ************************************ 00:24:43.987 21:07:12 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:24:43.987 21:07:12 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:24:43.987 21:07:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:43.987 21:07:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:43.987 21:07:12 -- common/autotest_common.sh@10 -- # set +x 00:24:43.988 ************************************ 00:24:43.988 START TEST dd_offset_magic 00:24:43.988 ************************************ 00:24:43.988 21:07:12 -- common/autotest_common.sh@1104 -- # offset_magic 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:24:43.988 21:07:12 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:24:43.988 21:07:12 -- dd/common.sh@31 -- # xtrace_disable 00:24:43.988 21:07:12 -- common/autotest_common.sh@10 -- # set +x 00:24:43.988 [2024-06-09 21:07:12.132106] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:43.988 [2024-06-09 21:07:12.132285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129590 ] 00:24:43.988 { 00:24:43.988 "subsystems": [ 00:24:43.988 { 00:24:43.988 "subsystem": "bdev", 00:24:43.988 "config": [ 00:24:43.988 { 00:24:43.988 "params": { 00:24:43.988 "block_size": 4096, 00:24:43.988 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:43.988 "name": "aio1" 00:24:43.988 }, 00:24:43.988 "method": "bdev_aio_create" 00:24:43.988 }, 00:24:43.988 { 00:24:43.988 "params": { 00:24:43.988 "trtype": "pcie", 00:24:43.988 "traddr": "0000:00:06.0", 00:24:43.988 "name": "Nvme0" 00:24:43.988 }, 00:24:43.988 "method": "bdev_nvme_attach_controller" 00:24:43.988 }, 00:24:43.988 { 00:24:43.988 "method": "bdev_wait_for_examine" 00:24:43.988 } 00:24:43.988 ] 00:24:43.988 } 00:24:43.988 ] 00:24:43.988 } 00:24:44.245 [2024-06-09 21:07:12.285798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.501 [2024-06-09 21:07:12.441398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.367  Copying: 65/65 [MB] (average 138 MBps) 00:24:46.367 00:24:46.367 21:07:14 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:24:46.367 21:07:14 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:24:46.367 21:07:14 -- dd/common.sh@31 -- # xtrace_disable 00:24:46.367 21:07:14 -- common/autotest_common.sh@10 -- # set +x 00:24:46.367 { 00:24:46.367 "subsystems": [ 00:24:46.367 { 00:24:46.367 "subsystem": "bdev", 00:24:46.367 "config": [ 00:24:46.367 { 00:24:46.367 "params": { 00:24:46.367 "block_size": 4096, 00:24:46.367 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:46.367 "name": "aio1" 00:24:46.367 }, 00:24:46.367 "method": "bdev_aio_create" 00:24:46.367 }, 00:24:46.367 { 00:24:46.367 "params": { 00:24:46.367 "trtype": "pcie", 00:24:46.367 "traddr": "0000:00:06.0", 00:24:46.367 "name": "Nvme0" 00:24:46.367 }, 00:24:46.367 "method": "bdev_nvme_attach_controller" 00:24:46.367 }, 00:24:46.367 { 00:24:46.367 "method": "bdev_wait_for_examine" 00:24:46.367 } 00:24:46.367 ] 00:24:46.367 } 00:24:46.367 ] 00:24:46.367 } 00:24:46.367 [2024-06-09 21:07:14.501917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:46.367 [2024-06-09 21:07:14.502103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129628 ] 00:24:46.625 [2024-06-09 21:07:14.668932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.883 [2024-06-09 21:07:14.827322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.075  Copying: 1024/1024 [kB] (average 500 MBps) 00:24:48.075 00:24:48.075 21:07:16 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:24:48.075 21:07:16 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:24:48.075 21:07:16 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:24:48.075 21:07:16 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:24:48.076 21:07:16 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:24:48.076 21:07:16 -- dd/common.sh@31 -- # xtrace_disable 00:24:48.076 21:07:16 -- common/autotest_common.sh@10 -- # set +x 00:24:48.334 { 00:24:48.334 "subsystems": [ 00:24:48.334 { 00:24:48.334 "subsystem": "bdev", 00:24:48.334 "config": [ 00:24:48.334 { 00:24:48.334 "params": { 00:24:48.334 "block_size": 4096, 00:24:48.334 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:48.334 "name": "aio1" 00:24:48.334 }, 00:24:48.334 "method": "bdev_aio_create" 00:24:48.334 }, 00:24:48.334 { 00:24:48.334 "params": { 00:24:48.334 "trtype": "pcie", 00:24:48.334 "traddr": "0000:00:06.0", 00:24:48.334 "name": "Nvme0" 00:24:48.334 }, 00:24:48.334 "method": "bdev_nvme_attach_controller" 00:24:48.334 }, 00:24:48.334 { 00:24:48.334 "method": "bdev_wait_for_examine" 00:24:48.334 } 00:24:48.334 ] 00:24:48.334 } 00:24:48.334 ] 00:24:48.334 } 00:24:48.334 [2024-06-09 21:07:16.311791] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
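Each dd_offset_magic pass follows the same write/read/verify shape visible in the xtrace above: copy 65 one-MiB units from Nvme0n1 into aio1 starting at the offset, copy a single unit back out from the same offset, then check that its first 26 bytes are the magic. A rough sketch (the read redirection is an assumption; gen_conf as before):

    for offset in 16 64; do
        spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek="$offset" \
                --bs=1048576 --json <(gen_conf)
        spdk_dd --ib=aio1 --of=test/dd/dd.dump1 --count=1 --skip="$offset" \
                --bs=1048576 --json <(gen_conf)
        read -rn26 magic_check < test/dd/dd.dump1       # first 26 bytes only
        [[ $magic_check == "$magic" ]]                  # 'This Is Our Magic, find it'
    done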
00:24:48.334 [2024-06-09 21:07:16.312042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129659 ] 00:24:48.334 [2024-06-09 21:07:16.482980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.592 [2024-06-09 21:07:16.676621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.091  Copying: 65/65 [MB] (average 1354 MBps) 00:24:50.091 00:24:50.091 21:07:18 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:24:50.091 21:07:18 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:24:50.091 21:07:18 -- dd/common.sh@31 -- # xtrace_disable 00:24:50.091 21:07:18 -- common/autotest_common.sh@10 -- # set +x 00:24:50.091 { 00:24:50.091 "subsystems": [ 00:24:50.091 { 00:24:50.091 "subsystem": "bdev", 00:24:50.091 "config": [ 00:24:50.091 { 00:24:50.091 "params": { 00:24:50.091 "block_size": 4096, 00:24:50.091 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:50.091 "name": "aio1" 00:24:50.091 }, 00:24:50.091 "method": "bdev_aio_create" 00:24:50.091 }, 00:24:50.091 { 00:24:50.091 "params": { 00:24:50.091 "trtype": "pcie", 00:24:50.091 "traddr": "0000:00:06.0", 00:24:50.091 "name": "Nvme0" 00:24:50.091 }, 00:24:50.091 "method": "bdev_nvme_attach_controller" 00:24:50.091 }, 00:24:50.091 { 00:24:50.091 "method": "bdev_wait_for_examine" 00:24:50.091 } 00:24:50.091 ] 00:24:50.091 } 00:24:50.091 ] 00:24:50.091 } 00:24:50.091 [2024-06-09 21:07:18.154532] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:50.092 [2024-06-09 21:07:18.154716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129686 ] 00:24:50.350 [2024-06-09 21:07:18.320649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.350 [2024-06-09 21:07:18.502206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.850  Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:51.850 00:24:51.850 21:07:19 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:24:51.850 21:07:19 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:24:51.850 00:24:51.850 real 0m7.823s 00:24:51.850 user 0m5.903s 00:24:51.850 sys 0m1.054s 00:24:51.850 21:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.850 ************************************ 00:24:51.850 END TEST dd_offset_magic 00:24:51.850 ************************************ 00:24:51.850 21:07:19 -- common/autotest_common.sh@10 -- # set +x 00:24:51.850 21:07:19 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:24:51.850 21:07:19 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:24:51.850 21:07:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:51.850 21:07:19 -- dd/common.sh@11 -- # local nvme_ref= 00:24:51.850 21:07:19 -- dd/common.sh@12 -- # local size=4194330 00:24:51.850 21:07:19 -- dd/common.sh@14 -- # local bs=1048576 00:24:51.850 21:07:19 -- dd/common.sh@15 -- # local count=5 00:24:51.850 21:07:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:24:51.850 21:07:19 -- dd/common.sh@18 -- # gen_conf 00:24:51.850 21:07:19 -- dd/common.sh@31 -- # xtrace_disable 00:24:51.851 21:07:19 -- common/autotest_common.sh@10 -- # set +x 00:24:51.851 [2024-06-09 21:07:19.994080] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
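The count=5 in clear_nvme above falls straight out of the size argument: 4194330 bytes is four full 1 MiB units plus a 26-byte tail (the same length as the magic string), so five bs=1048576 units of /dev/zero are needed to cover the whole range:

    4194330 = 4 * 1048576 + 26   =>   count = ceil(4194330 / 1048576) = 5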
00:24:51.851 [2024-06-09 21:07:19.994243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129730 ] 00:24:51.851 { 00:24:51.851 "subsystems": [ 00:24:51.851 { 00:24:51.851 "subsystem": "bdev", 00:24:51.851 "config": [ 00:24:51.851 { 00:24:51.851 "params": { 00:24:51.851 "block_size": 4096, 00:24:51.851 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:51.851 "name": "aio1" 00:24:51.851 }, 00:24:51.851 "method": "bdev_aio_create" 00:24:51.851 }, 00:24:51.851 { 00:24:51.851 "params": { 00:24:51.851 "trtype": "pcie", 00:24:51.851 "traddr": "0000:00:06.0", 00:24:51.851 "name": "Nvme0" 00:24:51.851 }, 00:24:51.851 "method": "bdev_nvme_attach_controller" 00:24:51.851 }, 00:24:51.851 { 00:24:51.851 "method": "bdev_wait_for_examine" 00:24:51.851 } 00:24:51.851 ] 00:24:51.851 } 00:24:51.851 ] 00:24:51.851 } 00:24:52.109 [2024-06-09 21:07:20.152066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.367 [2024-06-09 21:07:20.329060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.631  Copying: 5120/5120 [kB] (average 1250 MBps) 00:24:53.631 00:24:53.631 21:07:21 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:24:53.631 21:07:21 -- dd/common.sh@10 -- # local bdev=aio1 00:24:53.631 21:07:21 -- dd/common.sh@11 -- # local nvme_ref= 00:24:53.631 21:07:21 -- dd/common.sh@12 -- # local size=4194330 00:24:53.631 21:07:21 -- dd/common.sh@14 -- # local bs=1048576 00:24:53.631 21:07:21 -- dd/common.sh@15 -- # local count=5 00:24:53.631 21:07:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:24:53.631 21:07:21 -- dd/common.sh@18 -- # gen_conf 00:24:53.631 21:07:21 -- dd/common.sh@31 -- # xtrace_disable 00:24:53.631 21:07:21 -- common/autotest_common.sh@10 -- # set +x 00:24:53.631 [2024-06-09 21:07:21.757454] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:53.631 [2024-06-09 21:07:21.757659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129764 ] 00:24:53.631 { 00:24:53.631 "subsystems": [ 00:24:53.631 { 00:24:53.631 "subsystem": "bdev", 00:24:53.631 "config": [ 00:24:53.631 { 00:24:53.631 "params": { 00:24:53.631 "block_size": 4096, 00:24:53.631 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:53.631 "name": "aio1" 00:24:53.631 }, 00:24:53.631 "method": "bdev_aio_create" 00:24:53.631 }, 00:24:53.631 { 00:24:53.631 "params": { 00:24:53.631 "trtype": "pcie", 00:24:53.631 "traddr": "0000:00:06.0", 00:24:53.631 "name": "Nvme0" 00:24:53.631 }, 00:24:53.631 "method": "bdev_nvme_attach_controller" 00:24:53.631 }, 00:24:53.631 { 00:24:53.631 "method": "bdev_wait_for_examine" 00:24:53.631 } 00:24:53.631 ] 00:24:53.631 } 00:24:53.631 ] 00:24:53.631 } 00:24:53.889 [2024-06-09 21:07:21.912585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.147 [2024-06-09 21:07:22.081920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.340  Copying: 5120/5120 [kB] (average 1000 MBps) 00:24:55.340 00:24:55.340 21:07:23 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:24:55.340 00:24:55.340 real 0m18.416s 00:24:55.340 user 0m14.322s 00:24:55.340 sys 0m2.632s 00:24:55.340 21:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.340 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.340 ************************************ 00:24:55.340 END TEST spdk_dd_bdev_to_bdev 00:24:55.340 ************************************ 00:24:55.600 21:07:23 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:24:55.600 21:07:23 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:24:55.600 21:07:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:55.600 21:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.600 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.600 ************************************ 00:24:55.600 START TEST spdk_dd_sparse 00:24:55.600 ************************************ 00:24:55.600 21:07:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:24:55.600 * Looking for test storage... 
00:24:55.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:55.600 21:07:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.600 21:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.600 21:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.600 21:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.600 21:07:23 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.600 21:07:23 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.600 21:07:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.600 21:07:23 -- paths/export.sh@5 -- # export PATH 00:24:55.600 21:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.600 21:07:23 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:24:55.600 21:07:23 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:24:55.600 21:07:23 -- dd/sparse.sh@110 -- # file1=file_zero1 00:24:55.600 21:07:23 -- dd/sparse.sh@111 -- # file2=file_zero2 00:24:55.600 21:07:23 -- dd/sparse.sh@112 -- # file3=file_zero3 00:24:55.600 21:07:23 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:24:55.600 21:07:23 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:24:55.600 21:07:23 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:24:55.600 21:07:23 -- dd/sparse.sh@118 -- # prepare 00:24:55.600 21:07:23 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:24:55.600 21:07:23 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:24:55.600 1+0 records in 00:24:55.600 1+0 records 
out 00:24:55.600 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00874199 s, 480 MB/s 00:24:55.600 21:07:23 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:24:55.600 1+0 records in 00:24:55.600 1+0 records out 00:24:55.600 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00867558 s, 483 MB/s 00:24:55.600 21:07:23 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:24:55.600 1+0 records in 00:24:55.600 1+0 records out 00:24:55.600 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00823679 s, 509 MB/s 00:24:55.600 21:07:23 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:24:55.600 21:07:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:55.600 21:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.600 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.600 ************************************ 00:24:55.600 START TEST dd_sparse_file_to_file 00:24:55.600 ************************************ 00:24:55.600 21:07:23 -- common/autotest_common.sh@1104 -- # file_to_file 00:24:55.600 21:07:23 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:24:55.600 21:07:23 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:24:55.600 21:07:23 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:55.600 21:07:23 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:24:55.600 21:07:23 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:24:55.600 21:07:23 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:24:55.600 21:07:23 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:24:55.600 21:07:23 -- dd/sparse.sh@41 -- # gen_conf 00:24:55.600 21:07:23 -- dd/common.sh@31 -- # xtrace_disable 00:24:55.600 21:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:55.600 { 00:24:55.600 "subsystems": [ 00:24:55.600 { 00:24:55.600 "subsystem": "bdev", 00:24:55.600 "config": [ 00:24:55.600 { 00:24:55.600 "params": { 00:24:55.600 "block_size": 4096, 00:24:55.600 "filename": "dd_sparse_aio_disk", 00:24:55.600 "name": "dd_aio" 00:24:55.600 }, 00:24:55.600 "method": "bdev_aio_create" 00:24:55.600 }, 00:24:55.600 { 00:24:55.600 "params": { 00:24:55.600 "lvs_name": "dd_lvstore", 00:24:55.600 "bdev_name": "dd_aio" 00:24:55.600 }, 00:24:55.600 "method": "bdev_lvol_create_lvstore" 00:24:55.600 }, 00:24:55.600 { 00:24:55.600 "method": "bdev_wait_for_examine" 00:24:55.600 } 00:24:55.600 ] 00:24:55.600 } 00:24:55.600 ] 00:24:55.600 } 00:24:55.600 [2024-06-09 21:07:23.754424] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:55.600 [2024-06-09 21:07:23.755314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129845 ] 00:24:55.859 [2024-06-09 21:07:23.918307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.117 [2024-06-09 21:07:24.097671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.753  Copying: 12/36 [MB] (average 1000 MBps) 00:24:57.753 00:24:57.753 21:07:25 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:24:57.753 21:07:25 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:24:57.753 21:07:25 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:24:57.753 21:07:25 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:24:57.753 21:07:25 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:24:57.753 21:07:25 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:24:57.753 21:07:25 -- dd/sparse.sh@52 -- # stat1_b=24576 00:24:57.753 21:07:25 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:24:57.753 21:07:25 -- dd/sparse.sh@53 -- # stat2_b=24576 00:24:57.753 21:07:25 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:24:57.753 00:24:57.753 real 0m1.896s 00:24:57.753 user 0m1.473s 00:24:57.753 sys 0m0.282s 00:24:57.753 21:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.753 21:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:57.753 ************************************ 00:24:57.753 END TEST dd_sparse_file_to_file 00:24:57.753 ************************************ 00:24:57.753 21:07:25 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:24:57.753 21:07:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:57.753 21:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.753 21:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:57.753 ************************************ 00:24:57.753 START TEST dd_sparse_file_to_bdev 00:24:57.753 ************************************ 00:24:57.753 21:07:25 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:24:57.753 21:07:25 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:57.753 21:07:25 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:24:57.753 21:07:25 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:24:57.754 21:07:25 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:24:57.754 21:07:25 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:24:57.754 21:07:25 -- dd/sparse.sh@73 -- # gen_conf 00:24:57.754 21:07:25 -- dd/common.sh@31 -- # xtrace_disable 00:24:57.754 21:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:57.754 [2024-06-09 21:07:25.713087] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
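The stat pair above is the actual sparseness check and is worth unpacking. file_zero1 was built earlier with three 4 MiB writes at seek 0, 4 and 8 (seek counted in 4M blocks), so its data lives at 0-4 MiB, 16-20 MiB and 32-36 MiB with holes in between:

    logical size: (8 + 1) * 4 MiB      = 37748736 B   # stat --printf=%s
    allocated:    24576 * 512 B blocks = 12582912 B   # stat --printf=%b, the three data chunks

Both values match between file_zero1 and file_zero2, i.e. the --sparse copy reproduced the holes instead of filling them. 12582912 is presumably also why the test picked --bs=12582912 (exactly the 12 MiB of real data), and it is why the progress line reads "Copying: 12/36 [MB]".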
00:24:57.754 [2024-06-09 21:07:25.713381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129912 ] 00:24:57.754 { 00:24:57.754 "subsystems": [ 00:24:57.754 { 00:24:57.754 "subsystem": "bdev", 00:24:57.754 "config": [ 00:24:57.754 { 00:24:57.754 "params": { 00:24:57.754 "block_size": 4096, 00:24:57.754 "filename": "dd_sparse_aio_disk", 00:24:57.754 "name": "dd_aio" 00:24:57.754 }, 00:24:57.754 "method": "bdev_aio_create" 00:24:57.754 }, 00:24:57.754 { 00:24:57.754 "params": { 00:24:57.754 "lvs_name": "dd_lvstore", 00:24:57.754 "lvol_name": "dd_lvol", 00:24:57.754 "size": 37748736, 00:24:57.754 "thin_provision": true 00:24:57.754 }, 00:24:57.754 "method": "bdev_lvol_create" 00:24:57.754 }, 00:24:57.754 { 00:24:57.754 "method": "bdev_wait_for_examine" 00:24:57.754 } 00:24:57.754 ] 00:24:57.754 } 00:24:57.754 ] 00:24:57.754 } 00:24:57.754 [2024-06-09 21:07:25.881824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.012 [2024-06-09 21:07:26.061065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.271 [2024-06-09 21:07:26.347528] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:58.271  Copying: 12/36 [MB] (average 545 MBps)[2024-06-09 21:07:26.404682] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:59.648 00:24:59.648 00:24:59.648 00:24:59.648 real 0m1.852s 00:24:59.648 user 0m1.544s 00:24:59.648 sys 0m0.233s 00:24:59.648 21:07:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:59.648 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:24:59.648 ************************************ 00:24:59.648 END TEST dd_sparse_file_to_bdev 00:24:59.648 ************************************ 00:24:59.648 21:07:27 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:24:59.648 21:07:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:59.648 21:07:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:59.648 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:24:59.648 ************************************ 00:24:59.648 START TEST dd_sparse_bdev_to_file 00:24:59.648 ************************************ 00:24:59.648 21:07:27 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:24:59.648 21:07:27 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:24:59.648 21:07:27 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:24:59.648 21:07:27 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:24:59.648 21:07:27 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:24:59.648 21:07:27 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:24:59.648 21:07:27 -- dd/sparse.sh@91 -- # gen_conf 00:24:59.648 21:07:27 -- dd/common.sh@31 -- # xtrace_disable 00:24:59.648 21:07:27 -- common/autotest_common.sh@10 -- # set +x 00:24:59.648 { 00:24:59.648 "subsystems": [ 00:24:59.648 { 00:24:59.648 "subsystem": "bdev", 00:24:59.648 "config": [ 00:24:59.648 { 00:24:59.648 "params": { 00:24:59.648 "block_size": 4096, 00:24:59.648 "filename": 
"dd_sparse_aio_disk", 00:24:59.648 "name": "dd_aio" 00:24:59.648 }, 00:24:59.648 "method": "bdev_aio_create" 00:24:59.648 }, 00:24:59.648 { 00:24:59.648 "method": "bdev_wait_for_examine" 00:24:59.648 } 00:24:59.648 ] 00:24:59.648 } 00:24:59.648 ] 00:24:59.648 } 00:24:59.648 [2024-06-09 21:07:27.590948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:59.648 [2024-06-09 21:07:27.591138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129970 ] 00:24:59.648 [2024-06-09 21:07:27.757747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.907 [2024-06-09 21:07:27.914340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.541  Copying: 12/36 [MB] (average 1000 MBps) 00:25:01.541 00:25:01.541 21:07:29 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:25:01.541 21:07:29 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:25:01.541 21:07:29 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:25:01.541 21:07:29 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:25:01.541 21:07:29 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:25:01.541 21:07:29 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:25:01.541 21:07:29 -- dd/sparse.sh@102 -- # stat2_b=24576 00:25:01.541 21:07:29 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:25:01.541 21:07:29 -- dd/sparse.sh@103 -- # stat3_b=24576 00:25:01.541 21:07:29 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:25:01.541 00:25:01.541 real 0m1.802s 00:25:01.541 user 0m1.433s 00:25:01.541 sys 0m0.255s 00:25:01.541 21:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.541 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.541 ************************************ 00:25:01.541 END TEST dd_sparse_bdev_to_file 00:25:01.541 ************************************ 00:25:01.541 21:07:29 -- dd/sparse.sh@1 -- # cleanup 00:25:01.541 21:07:29 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:25:01.541 21:07:29 -- dd/sparse.sh@12 -- # rm file_zero1 00:25:01.541 21:07:29 -- dd/sparse.sh@13 -- # rm file_zero2 00:25:01.541 21:07:29 -- dd/sparse.sh@14 -- # rm file_zero3 00:25:01.541 ************************************ 00:25:01.541 END TEST spdk_dd_sparse 00:25:01.541 ************************************ 00:25:01.541 00:25:01.541 real 0m5.833s 00:25:01.541 user 0m4.596s 00:25:01.541 sys 0m0.906s 00:25:01.541 21:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.541 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.541 21:07:29 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:25:01.541 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:01.541 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.541 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.541 ************************************ 00:25:01.541 START TEST spdk_dd_negative 00:25:01.541 ************************************ 00:25:01.541 21:07:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:25:01.541 * Looking for test storage... 
00:25:01.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:01.541 21:07:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.541 21:07:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.541 21:07:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.541 21:07:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.541 21:07:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:01.541 21:07:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:01.541 21:07:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:01.541 21:07:29 -- paths/export.sh@5 -- # export PATH 00:25:01.541 21:07:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:01.541 21:07:29 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:01.541 21:07:29 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:01.541 21:07:29 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:01.542 21:07:29 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:01.542 21:07:29 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:25:01.542 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:01.542 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.542 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.542 ************************************ 00:25:01.542 
START TEST dd_invalid_arguments 00:25:01.542 ************************************ 00:25:01.542 21:07:29 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:25:01.542 21:07:29 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:01.542 21:07:29 -- common/autotest_common.sh@640 -- # local es=0 00:25:01.542 21:07:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:01.542 21:07:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:01.542 21:07:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:01.542 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:25:01.542 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:25:01.542 options: 00:25:01.542 -c, --config JSON config file (default none) 00:25:01.542 --json JSON config file (default none) 00:25:01.542 --json-ignore-init-errors 00:25:01.542 don't exit on invalid config entry 00:25:01.542 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:25:01.542 -g, --single-file-segments 00:25:01.542 force creating just one hugetlbfs file 00:25:01.542 -h, --help show this usage 00:25:01.542 -i, --shm-id shared memory ID (optional) 00:25:01.542 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:25:01.542 --lcores lcore to CPU mapping list. The list is in the format: 00:25:01.542 [<,lcores[@CPUs]>...] 00:25:01.542 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:25:01.542 Within the group, '-' is used for range separator, 00:25:01.542 ',' is used for single number separator. 00:25:01.542 '( )' can be omitted for single element group, 00:25:01.542 '@' can be omitted if cpus and lcores have the same value 00:25:01.542 -n, --mem-channels channel number of memory channels used for DPDK 00:25:01.542 -p, --main-core main (primary) core for DPDK 00:25:01.542 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:25:01.542 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:25:01.542 --disable-cpumask-locks Disable CPU core lock files. 
00:25:01.542 --silence-noticelog disable notice level logging to stderr 00:25:01.542 --msg-mempool-size global message memory pool size in count (default: 262143) 00:25:01.542 -u, --no-pci disable PCI access 00:25:01.542 --wait-for-rpc wait for RPCs to initialize subsystems 00:25:01.542 --max-delay maximum reactor delay (in microseconds) 00:25:01.542 -B, --pci-blocked pci addr to block (can be used more than once) 00:25:01.542 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:25:01.542 -R, --huge-unlink unlink huge files after initialization 00:25:01.542 -v, --version print SPDK version 00:25:01.542 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:25:01.542 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:25:01.542 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:25:01.542 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:25:01.542 Tracepoints vary in size and can use more than one trace entry. 00:25:01.542 --rpcs-allowed comma-separated list of permitted RPCs 00:25:01.542 --env-context Opaque context for use of the env implementation 00:25:01.542 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:25:01.542 --no-huge run without using hugepages 00:25:01.542 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:25:01.542 -e, --tpoint-group [:] 00:25:01.542 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:25:01.542 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:25:01.542 Groups and masks can be [2024-06-09 21:07:29.590098] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:25:01.542 combined (e.g. thread,bdev:0x1). 00:25:01.542 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:25:01.542 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:25:01.542 [--------- DD Options ---------] 00:25:01.542 --if Input file. Must specify either --if or --ib. 00:25:01.542 --ib Input bdev. Must specify either --if or --ib 00:25:01.542 --of Output file. Must specify either --of or --ob. 00:25:01.542 --ob Output bdev. Must specify either --of or --ob. 00:25:01.542 --iflag Input file flags. 00:25:01.542 --oflag Output file flags. 00:25:01.542 --bs I/O unit size (default: 4096) 00:25:01.542 --qd Queue depth (default: 2) 00:25:01.542 --count I/O unit count. The number of I/O units to copy. (default: all) 00:25:01.542 --skip Skip this many I/O units at start of input. (default: 0) 00:25:01.542 --seek Skip this many I/O units at start of output.
(default: 0) 00:25:01.542 --aio Force usage of AIO. (by default io_uring is used if available) 00:25:01.542 --sparse Enable hole skipping in input target 00:25:01.542 Available iflag and oflag values: 00:25:01.542 append - append mode 00:25:01.542 direct - use direct I/O for data 00:25:01.542 directory - fail unless a directory 00:25:01.542 dsync - use synchronized I/O for data 00:25:01.542 noatime - do not update access time 00:25:01.542 noctty - do not assign controlling terminal from file 00:25:01.542 nofollow - do not follow symlinks 00:25:01.542 nonblock - use non-blocking I/O 00:25:01.542 sync - use synchronized I/O for data and metadata 00:25:01.542 21:07:29 -- common/autotest_common.sh@643 -- # es=2 00:25:01.542 21:07:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:01.542 21:07:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:01.542 21:07:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:01.542 00:25:01.542 real 0m0.097s 00:25:01.542 user 0m0.041s 00:25:01.542 sys 0m0.056s 00:25:01.542 21:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.542 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.542 ************************************ 00:25:01.542 END TEST dd_invalid_arguments 00:25:01.542 ************************************ 00:25:01.542 21:07:29 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:25:01.542 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:01.542 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.542 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.542 ************************************ 00:25:01.542 START TEST dd_double_input 00:25:01.542 ************************************ 00:25:01.542 21:07:29 -- common/autotest_common.sh@1104 -- # double_input 00:25:01.542 21:07:29 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:01.542 21:07:29 -- common/autotest_common.sh@640 -- # local es=0 00:25:01.542 21:07:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:01.542 21:07:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.542 21:07:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:01.542 21:07:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:01.801 [2024-06-09 21:07:29.749311] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
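The valid_exec_arg/NOT machinery traced above reduces to one idea: a negative test passes only when the command under test fails. A minimal sketch of that primitive, simplified from autotest_common.sh (the real helper also resolves the command via type -t/type -P, as the trace shows, and maps exit codes above 128 from signals, as the es accounting below shows):

# Succeed only when the wrapped command exits non-zero.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Expected to pass: spdk_dd must reject a second input target.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --ib= --ob=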
00:25:01.801 21:07:29 -- common/autotest_common.sh@643 -- # es=22 00:25:01.801 21:07:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:01.801 21:07:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:01.801 21:07:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:01.801 00:25:01.801 real 0m0.109s 00:25:01.801 user 0m0.049s 00:25:01.801 sys 0m0.060s 00:25:01.801 21:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.801 ************************************ 00:25:01.801 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.801 END TEST dd_double_input 00:25:01.801 ************************************ 00:25:01.801 21:07:29 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:25:01.801 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:01.801 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:01.801 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.801 ************************************ 00:25:01.801 START TEST dd_double_output 00:25:01.801 ************************************ 00:25:01.801 21:07:29 -- common/autotest_common.sh@1104 -- # double_output 00:25:01.801 21:07:29 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:01.801 21:07:29 -- common/autotest_common.sh@640 -- # local es=0 00:25:01.801 21:07:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:01.801 21:07:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.801 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.801 21:07:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.801 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.801 21:07:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.801 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:01.801 21:07:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:01.801 21:07:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:01.801 21:07:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:01.801 [2024-06-09 21:07:29.903129] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
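Both of these negative cases probe the same rule: spdk_dd takes exactly one input endpoint (--if for a file, --ib for a bdev) and exactly one output endpoint (--of or --ob). A sketch of the valid and invalid shapes, with /tmp paths and the bdev name Malloc0 used purely for illustration:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/tmp/in.bin --of=/tmp/out.bin           # file to file: accepted
"$SPDK_DD" --if=/tmp/in.bin --ob=Malloc0                # file to bdev: accepted
"$SPDK_DD" --if=/tmp/in.bin --ib=Malloc0 --ob=Malloc0   # two inputs: rejected with es=22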
00:25:01.801 21:07:29 -- common/autotest_common.sh@643 -- # es=22 00:25:01.801 21:07:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:01.801 21:07:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:01.801 21:07:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:01.801 00:25:01.801 real 0m0.109s 00:25:01.801 user 0m0.050s 00:25:01.801 sys 0m0.059s 00:25:01.801 21:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.801 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:01.801 ************************************ 00:25:01.801 END TEST dd_double_output 00:25:01.801 ************************************ 00:25:02.060 21:07:29 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:25:02.060 21:07:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.060 21:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.060 21:07:29 -- common/autotest_common.sh@10 -- # set +x 00:25:02.060 ************************************ 00:25:02.060 START TEST dd_no_input 00:25:02.060 ************************************ 00:25:02.060 21:07:29 -- common/autotest_common.sh@1104 -- # no_input 00:25:02.060 21:07:29 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:02.060 21:07:29 -- common/autotest_common.sh@640 -- # local es=0 00:25:02.060 21:07:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:02.060 21:07:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:02.060 21:07:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:02.060 [2024-06-09 21:07:30.076417] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:25:02.060 21:07:30 -- common/autotest_common.sh@643 -- # es=22 00:25:02.060 21:07:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:02.060 21:07:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:02.060 21:07:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:02.060 00:25:02.060 real 0m0.135s 00:25:02.060 user 0m0.095s 00:25:02.060 sys 0m0.040s 00:25:02.060 21:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.060 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.060 ************************************ 00:25:02.060 END TEST dd_no_input 00:25:02.060 ************************************ 00:25:02.060 21:07:30 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:25:02.060 21:07:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.060 21:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.060 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.060 ************************************ 
00:25:02.060 START TEST dd_no_output 00:25:02.060 ************************************ 00:25:02.060 21:07:30 -- common/autotest_common.sh@1104 -- # no_output 00:25:02.060 21:07:30 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:02.060 21:07:30 -- common/autotest_common.sh@640 -- # local es=0 00:25:02.060 21:07:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:02.060 21:07:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.060 21:07:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.060 21:07:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:02.060 21:07:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:02.319 [2024-06-09 21:07:30.244627] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:25:02.319 21:07:30 -- common/autotest_common.sh@643 -- # es=22 00:25:02.319 21:07:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:02.319 21:07:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:02.319 21:07:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:02.319 00:25:02.319 real 0m0.107s 00:25:02.319 user 0m0.047s 00:25:02.319 sys 0m0.059s 00:25:02.319 21:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.319 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.319 ************************************ 00:25:02.319 END TEST dd_no_output 00:25:02.319 ************************************ 00:25:02.319 21:07:30 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:25:02.319 21:07:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.319 21:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.319 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.319 ************************************ 00:25:02.319 START TEST dd_wrong_blocksize 00:25:02.319 ************************************ 00:25:02.319 21:07:30 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:25:02.319 21:07:30 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:02.319 21:07:30 -- common/autotest_common.sh@640 -- # local es=0 00:25:02.319 21:07:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:02.319 21:07:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.319 21:07:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.319 21:07:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.319 21:07:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:02.319 21:07:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:02.319 [2024-06-09 21:07:30.396847] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:25:02.319 21:07:30 -- common/autotest_common.sh@643 -- # es=22 00:25:02.319 21:07:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:02.319 21:07:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:02.319 21:07:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:02.319 00:25:02.319 real 0m0.103s 00:25:02.319 user 0m0.052s 00:25:02.319 sys 0m0.052s 00:25:02.319 21:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.319 ************************************ 00:25:02.319 END TEST dd_wrong_blocksize 00:25:02.319 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.319 ************************************ 00:25:02.319 21:07:30 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:25:02.319 21:07:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:02.319 21:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.319 21:07:30 -- common/autotest_common.sh@10 -- # set +x 00:25:02.319 ************************************ 00:25:02.319 START TEST dd_smaller_blocksize 00:25:02.319 ************************************ 00:25:02.319 21:07:30 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:25:02.319 21:07:30 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:02.319 21:07:30 -- common/autotest_common.sh@640 -- # local es=0 00:25:02.319 21:07:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:02.319 21:07:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.319 21:07:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.578 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.578 21:07:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.578 21:07:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:02.578 21:07:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:02.578 21:07:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:25:02.578 21:07:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:02.578 [2024-06-09 21:07:30.561464] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:02.578 [2024-06-09 21:07:30.561716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130235 ] 00:25:02.578 [2024-06-09 21:07:30.733472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.837 [2024-06-09 21:07:30.959220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.404 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:25:03.404 [2024-06-09 21:07:31.559075] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:25:03.404 [2024-06-09 21:07:31.559192] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:04.002 [2024-06-09 21:07:32.154570] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:04.569 21:07:32 -- common/autotest_common.sh@643 -- # es=244 00:25:04.569 21:07:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.569 21:07:32 -- common/autotest_common.sh@652 -- # es=116 00:25:04.569 21:07:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@660 -- # es=1 00:25:04.569 21:07:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.569 00:25:04.569 real 0m2.009s 00:25:04.569 user 0m1.437s 00:25:04.569 sys 0m0.466s 00:25:04.569 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.569 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.569 ************************************ 00:25:04.569 END TEST dd_smaller_blocksize 00:25:04.569 ************************************ 00:25:04.569 21:07:32 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:25:04.569 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:04.569 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.569 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.569 ************************************ 00:25:04.569 START TEST dd_invalid_count 00:25:04.569 ************************************ 00:25:04.569 21:07:32 -- common/autotest_common.sh@1104 -- # invalid_count 00:25:04.569 21:07:32 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:04.569 21:07:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.569 21:07:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:04.569 21:07:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:04.569 21:07:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:04.569 [2024-06-09 21:07:32.609403] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:25:04.569 21:07:32 -- common/autotest_common.sh@643 -- # es=22 00:25:04.569 21:07:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.569 21:07:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:04.569 21:07:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.569 00:25:04.569 real 0m0.104s 00:25:04.569 user 0m0.073s 00:25:04.569 sys 0m0.031s 00:25:04.569 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.569 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.569 ************************************ 00:25:04.569 END TEST dd_invalid_count 00:25:04.569 ************************************ 00:25:04.569 21:07:32 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:25:04.569 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:04.569 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.569 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.569 ************************************ 00:25:04.569 START TEST dd_invalid_oflag 00:25:04.569 ************************************ 00:25:04.569 21:07:32 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:25:04.569 21:07:32 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:04.569 21:07:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.569 21:07:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:04.569 21:07:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.569 21:07:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.569 21:07:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:04.569 21:07:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:04.828 [2024-06-09 21:07:32.763102] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:25:04.828 21:07:32 -- common/autotest_common.sh@643 -- # es=22 00:25:04.828 21:07:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.828 21:07:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:04.828 
21:07:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.828 00:25:04.828 real 0m0.106s 00:25:04.828 user 0m0.052s 00:25:04.828 sys 0m0.054s 00:25:04.828 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.828 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 ************************************ 00:25:04.828 END TEST dd_invalid_oflag 00:25:04.828 ************************************ 00:25:04.828 21:07:32 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:25:04.828 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:04.828 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.828 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 ************************************ 00:25:04.828 START TEST dd_invalid_iflag 00:25:04.828 ************************************ 00:25:04.828 21:07:32 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:25:04.828 21:07:32 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:04.828 21:07:32 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.828 21:07:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:04.828 21:07:32 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.828 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.828 21:07:32 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.828 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.828 21:07:32 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.828 21:07:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:04.828 21:07:32 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:04.828 21:07:32 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:04.828 21:07:32 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:04.828 [2024-06-09 21:07:32.912464] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:25:04.828 21:07:32 -- common/autotest_common.sh@643 -- # es=22 00:25:04.828 21:07:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:04.828 21:07:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:04.828 21:07:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:04.828 00:25:04.828 real 0m0.105s 00:25:04.828 user 0m0.070s 00:25:04.828 sys 0m0.035s 00:25:04.828 21:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:04.828 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 ************************************ 00:25:04.828 END TEST dd_invalid_iflag 00:25:04.828 ************************************ 00:25:04.828 21:07:32 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:25:04.828 21:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:04.828 21:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:04.828 21:07:32 -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 ************************************ 00:25:04.828 START TEST dd_unknown_flag 00:25:04.828 ************************************ 00:25:04.828 21:07:33 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:25:04.828 21:07:33 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:04.828 21:07:33 -- common/autotest_common.sh@640 -- # local es=0 00:25:04.828 21:07:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:04.828 21:07:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.087 21:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.087 21:07:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.087 21:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.087 21:07:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.087 21:07:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.087 21:07:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.087 21:07:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:05.087 21:07:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:05.087 [2024-06-09 21:07:33.063750] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:05.087 [2024-06-09 21:07:33.063943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130360 ] 00:25:05.087 [2024-06-09 21:07:33.226693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.344 [2024-06-09 21:07:33.408389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.602 [2024-06-09 21:07:33.674856] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:25:05.602 [2024-06-09 21:07:33.674999] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:25:05.602 [2024-06-09 21:07:33.675027] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:25:05.602 [2024-06-09 21:07:33.675089] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:06.168 [2024-06-09 21:07:34.254163] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:06.425 21:07:34 -- common/autotest_common.sh@643 -- # es=236 00:25:06.425 21:07:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:06.425 21:07:34 -- common/autotest_common.sh@652 -- # es=108 00:25:06.425 21:07:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:06.426 21:07:34 -- common/autotest_common.sh@660 -- # es=1 00:25:06.426 21:07:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:06.426 00:25:06.426 real 0m1.586s 00:25:06.426 user 0m1.273s 00:25:06.426 sys 0m0.212s 00:25:06.426 21:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.426 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:25:06.426 ************************************ 00:25:06.426 END 
TEST dd_unknown_flag 00:25:06.426 ************************************ 00:25:06.683 21:07:34 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:25:06.683 21:07:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:06.683 21:07:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.683 21:07:34 -- common/autotest_common.sh@10 -- # set +x 00:25:06.683 ************************************ 00:25:06.683 START TEST dd_invalid_json 00:25:06.683 ************************************ 00:25:06.683 21:07:34 -- common/autotest_common.sh@1104 -- # invalid_json 00:25:06.683 21:07:34 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:06.683 21:07:34 -- common/autotest_common.sh@640 -- # local es=0 00:25:06.683 21:07:34 -- dd/negative_dd.sh@95 -- # : 00:25:06.683 21:07:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:06.683 21:07:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.683 21:07:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:06.683 21:07:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.683 21:07:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:06.683 21:07:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.683 21:07:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:06.683 21:07:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:06.683 21:07:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:06.683 21:07:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:06.683 [2024-06-09 21:07:34.698908] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:06.683 [2024-06-09 21:07:34.699114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130408 ] 00:25:06.683 [2024-06-09 21:07:34.854639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.940 [2024-06-09 21:07:35.030694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.940 [2024-06-09 21:07:35.030948] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:25:06.940 [2024-06-09 21:07:35.030997] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:06.940 [2024-06-09 21:07:35.031119] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:07.199 21:07:35 -- common/autotest_common.sh@643 -- # es=234 00:25:07.199 21:07:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:07.199 21:07:35 -- common/autotest_common.sh@652 -- # es=106 00:25:07.199 21:07:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:07.199 21:07:35 -- common/autotest_common.sh@660 -- # es=1 00:25:07.199 21:07:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:07.199 00:25:07.199 real 0m0.716s 00:25:07.199 user 0m0.507s 00:25:07.199 sys 0m0.111s 00:25:07.199 21:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.199 ************************************ 00:25:07.199 END TEST dd_invalid_json 00:25:07.199 ************************************ 00:25:07.199 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:25:07.457 00:25:07.457 real 0m5.954s 00:25:07.457 user 0m4.099s 00:25:07.457 sys 0m1.531s 00:25:07.457 21:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.457 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:25:07.457 ************************************ 00:25:07.457 END TEST spdk_dd_negative 00:25:07.457 ************************************ 00:25:07.457 00:25:07.457 real 2m22.228s 00:25:07.457 user 1m51.805s 00:25:07.457 sys 0m20.492s 00:25:07.457 21:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.457 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:25:07.457 ************************************ 00:25:07.457 END TEST spdk_dd 00:25:07.457 ************************************ 00:25:07.457 21:07:35 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:25:07.457 21:07:35 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:07.457 21:07:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:07.457 21:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:07.457 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:25:07.457 ************************************ 00:25:07.457 START TEST blockdev_nvme 00:25:07.457 ************************************ 00:25:07.457 21:07:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:07.457 * Looking for test storage... 
00:25:07.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:07.457 21:07:35 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:07.457 21:07:35 -- bdev/nbd_common.sh@6 -- # set -e 00:25:07.457 21:07:35 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:07.457 21:07:35 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:07.457 21:07:35 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:07.457 21:07:35 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:07.457 21:07:35 -- bdev/blockdev.sh@18 -- # : 00:25:07.457 21:07:35 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:25:07.457 21:07:35 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:25:07.457 21:07:35 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:25:07.457 21:07:35 -- bdev/blockdev.sh@672 -- # uname -s 00:25:07.457 21:07:35 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:25:07.457 21:07:35 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:25:07.457 21:07:35 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:25:07.457 21:07:35 -- bdev/blockdev.sh@681 -- # crypto_device= 00:25:07.457 21:07:35 -- bdev/blockdev.sh@682 -- # dek= 00:25:07.457 21:07:35 -- bdev/blockdev.sh@683 -- # env_ctx= 00:25:07.457 21:07:35 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:25:07.457 21:07:35 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:25:07.457 21:07:35 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:25:07.457 21:07:35 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:25:07.457 21:07:35 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:25:07.457 21:07:35 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=130504 00:25:07.457 21:07:35 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:07.457 21:07:35 -- bdev/blockdev.sh@47 -- # waitforlisten 130504 00:25:07.457 21:07:35 -- common/autotest_common.sh@819 -- # '[' -z 130504 ']' 00:25:07.457 21:07:35 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:07.457 21:07:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.457 21:07:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.457 21:07:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.457 21:07:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.457 21:07:35 -- common/autotest_common.sh@10 -- # set +x 00:25:07.457 [2024-06-09 21:07:35.630133] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:07.457 [2024-06-09 21:07:35.630325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130504 ] 00:25:07.715 [2024-06-09 21:07:35.800265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.973 [2024-06-09 21:07:35.967950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:07.973 [2024-06-09 21:07:35.968222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.349 21:07:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:09.349 21:07:37 -- common/autotest_common.sh@852 -- # return 0 00:25:09.349 21:07:37 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:25:09.349 21:07:37 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:25:09.349 21:07:37 -- bdev/blockdev.sh@79 -- # local json 00:25:09.349 21:07:37 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:25:09.349 21:07:37 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:09.349 21:07:37 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@738 -- # cat 00:25:09.349 21:07:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:25:09.349 21:07:37 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:25:09.349 21:07:37 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:25:09.349 21:07:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:09.349 21:07:37 -- common/autotest_common.sh@10 -- # set +x 00:25:09.349 21:07:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.349 21:07:37 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:25:09.349 21:07:37 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4c861aed-5fd5-4eaf-a9a1-bddfebcaa063"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4c861aed-5fd5-4eaf-a9a1-bddfebcaa063",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:25:09.349 21:07:37 -- bdev/blockdev.sh@747 -- # jq -r .name 00:25:09.349 21:07:37 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:25:09.349 21:07:37 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:25:09.349 21:07:37 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:25:09.349 21:07:37 -- bdev/blockdev.sh@752 -- # killprocess 130504 00:25:09.349 21:07:37 -- common/autotest_common.sh@926 -- # '[' -z 130504 ']' 00:25:09.349 21:07:37 -- common/autotest_common.sh@930 -- # kill -0 130504 00:25:09.349 21:07:37 -- common/autotest_common.sh@931 -- # uname 00:25:09.349 21:07:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:09.349 21:07:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130504 00:25:09.349 21:07:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:09.349 21:07:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:09.349 killing process with pid 130504 00:25:09.349 21:07:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130504' 00:25:09.349 21:07:37 -- common/autotest_common.sh@945 -- # kill 130504 00:25:09.349 21:07:37 -- common/autotest_common.sh@950 -- # wait 130504 00:25:11.250 21:07:39 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:11.250 21:07:39 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:11.250 21:07:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:25:11.250 21:07:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:11.250 21:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:11.250 ************************************ 00:25:11.250 START TEST bdev_hello_world 00:25:11.250 ************************************ 00:25:11.250 21:07:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:11.250 [2024-06-09 21:07:39.405068] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:11.250 [2024-06-09 21:07:39.405274] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130588 ] 00:25:11.509 [2024-06-09 21:07:39.571453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.768 [2024-06-09 21:07:39.758723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.026 [2024-06-09 21:07:40.145354] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:12.026 [2024-06-09 21:07:40.145451] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:25:12.026 [2024-06-09 21:07:40.145504] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:12.026 [2024-06-09 21:07:40.148215] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:12.026 [2024-06-09 21:07:40.148812] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:12.026 [2024-06-09 21:07:40.148888] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:12.026 [2024-06-09 21:07:40.149230] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:12.026 00:25:12.026 [2024-06-09 21:07:40.149275] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:12.962 00:25:12.962 real 0m1.758s 00:25:12.963 user 0m1.433s 00:25:12.963 sys 0m0.225s 00:25:12.963 21:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.963 ************************************ 00:25:12.963 END TEST bdev_hello_world 00:25:12.963 ************************************ 00:25:12.963 21:07:41 -- common/autotest_common.sh@10 -- # set +x 00:25:12.963 21:07:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:25:12.963 21:07:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:12.963 21:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.963 21:07:41 -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 ************************************ 00:25:13.221 START TEST bdev_bounds 00:25:13.221 ************************************ 00:25:13.221 21:07:41 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:25:13.221 21:07:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=130639 00:25:13.221 21:07:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:13.221 Process bdevio pid: 130639 00:25:13.221 21:07:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 130639' 00:25:13.221 21:07:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:13.221 21:07:41 -- bdev/blockdev.sh@291 -- # waitforlisten 130639 00:25:13.221 21:07:41 -- common/autotest_common.sh@819 -- # '[' -z 130639 ']' 00:25:13.221 21:07:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.221 21:07:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:13.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.221 21:07:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
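The bounds test starting here runs bdevio in wait mode against the same bdev.json: the app loads the Nvme0n1 bdev, listens on the default RPC socket, and a helper script then kicks off the I/O-boundary suite. Condensed from the trace (the -w flag presumably holds the app until perform_tests arrives over RPC):

BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$BDEVIO" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# once the RPC socket is up:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests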
00:25:13.221 21:07:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:13.221 21:07:41 -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 [2024-06-09 21:07:41.203891] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:13.221 [2024-06-09 21:07:41.204070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130639 ] 00:25:13.221 [2024-06-09 21:07:41.368033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.480 [2024-06-09 21:07:41.551567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.480 [2024-06-09 21:07:41.551734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.480 [2024-06-09 21:07:41.551726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.047 21:07:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:14.047 21:07:42 -- common/autotest_common.sh@852 -- # return 0 00:25:14.047 21:07:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:14.305 I/O targets: 00:25:14.305 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:25:14.305 00:25:14.305 00:25:14.305 CUnit - A unit testing framework for C - Version 2.1-3 00:25:14.305 http://cunit.sourceforge.net/ 00:25:14.305 00:25:14.305 00:25:14.305 Suite: bdevio tests on: Nvme0n1 00:25:14.305 Test: blockdev write read block ...passed 00:25:14.305 Test: blockdev write zeroes read block ...passed 00:25:14.305 Test: blockdev write zeroes read no split ...passed 00:25:14.305 Test: blockdev write zeroes read split ...passed 00:25:14.305 Test: blockdev write zeroes read split partial ...passed 00:25:14.305 Test: blockdev reset ...[2024-06-09 21:07:42.310432] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:14.305 [2024-06-09 21:07:42.313722] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:14.305 passed 00:25:14.305 Test: blockdev write read 8 blocks ...passed 00:25:14.305 Test: blockdev write read size > 128k ...passed 00:25:14.305 Test: blockdev write read invalid size ...passed 00:25:14.305 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:14.305 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:14.305 Test: blockdev write read max offset ...passed 00:25:14.305 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:14.305 Test: blockdev writev readv 8 blocks ...passed 00:25:14.305 Test: blockdev writev readv 30 x 1block ...passed 00:25:14.305 Test: blockdev writev readv block ...passed 00:25:14.305 Test: blockdev writev readv size > 128k ...passed 00:25:14.305 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:14.305 Test: blockdev comparev and writev ...[2024-06-09 21:07:42.322004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x30c0d000 len:0x1000 00:25:14.305 [2024-06-09 21:07:42.322123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:14.305 passed 00:25:14.305 Test: blockdev nvme passthru rw ...passed 00:25:14.305 Test: blockdev nvme passthru vendor specific ...[2024-06-09 21:07:42.322975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:14.305 [2024-06-09 21:07:42.323041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:14.305 passed 00:25:14.305 Test: blockdev nvme admin passthru ...passed 00:25:14.305 Test: blockdev copy ...passed 00:25:14.305 00:25:14.305 Run Summary: Type Total Ran Passed Failed Inactive 00:25:14.305 suites 1 1 n/a 0 0 00:25:14.305 tests 23 23 23 0 0 00:25:14.305 asserts 152 152 152 0 n/a 00:25:14.305 00:25:14.305 Elapsed time = 0.187 seconds 00:25:14.305 0 00:25:14.305 21:07:42 -- bdev/blockdev.sh@293 -- # killprocess 130639 00:25:14.305 21:07:42 -- common/autotest_common.sh@926 -- # '[' -z 130639 ']' 00:25:14.305 21:07:42 -- common/autotest_common.sh@930 -- # kill -0 130639 00:25:14.305 21:07:42 -- common/autotest_common.sh@931 -- # uname 00:25:14.305 21:07:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:14.305 21:07:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130639 00:25:14.305 21:07:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:14.305 killing process with pid 130639 00:25:14.305 21:07:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:14.305 21:07:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130639' 00:25:14.306 21:07:42 -- common/autotest_common.sh@945 -- # kill 130639 00:25:14.306 21:07:42 -- common/autotest_common.sh@950 -- # wait 130639 00:25:15.255 21:07:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:25:15.255 00:25:15.255 real 0m2.189s 00:25:15.255 user 0m5.244s 00:25:15.255 sys 0m0.349s 00:25:15.255 21:07:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.255 21:07:43 -- common/autotest_common.sh@10 -- # set +x 00:25:15.255 ************************************ 00:25:15.255 END TEST bdev_bounds 00:25:15.255 ************************************ 00:25:15.255 21:07:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
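The nbd test launched here exports the bdev through the kernel NBD driver so stock block tools can exercise it; the flow, condensed from the trace that follows (socket path and device node exactly as reported there):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1   # maps Nvme0n1 to /dev/nbd0
dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct   # read one 4 KiB block back
"$RPC" -s /var/tmp/spdk-nbd.sock nbd_get_disks            # list active NBD exports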
00:25:15.255 21:07:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:15.255 21:07:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:15.255 21:07:43 -- common/autotest_common.sh@10 -- # set +x 00:25:15.255 ************************************ 00:25:15.255 START TEST bdev_nbd 00:25:15.255 ************************************ 00:25:15.255 21:07:43 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:25:15.255 21:07:43 -- bdev/blockdev.sh@298 -- # uname -s 00:25:15.255 21:07:43 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:25:15.255 21:07:43 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:15.255 21:07:43 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:15.255 21:07:43 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:25:15.255 21:07:43 -- bdev/blockdev.sh@302 -- # local bdev_all 00:25:15.255 21:07:43 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:25:15.255 21:07:43 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:25:15.255 21:07:43 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:15.255 21:07:43 -- bdev/blockdev.sh@309 -- # local nbd_all 00:25:15.255 21:07:43 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:25:15.255 21:07:43 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:25:15.255 21:07:43 -- bdev/blockdev.sh@312 -- # local nbd_list 00:25:15.255 21:07:43 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:25:15.255 21:07:43 -- bdev/blockdev.sh@313 -- # local bdev_list 00:25:15.255 21:07:43 -- bdev/blockdev.sh@316 -- # nbd_pid=130695 00:25:15.255 21:07:43 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:15.255 21:07:43 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:15.255 21:07:43 -- bdev/blockdev.sh@318 -- # waitforlisten 130695 /var/tmp/spdk-nbd.sock 00:25:15.255 21:07:43 -- common/autotest_common.sh@819 -- # '[' -z 130695 ']' 00:25:15.255 21:07:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:15.255 21:07:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:15.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:15.255 21:07:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:15.255 21:07:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:15.255 21:07:43 -- common/autotest_common.sh@10 -- # set +x 00:25:15.514 [2024-06-09 21:07:43.452004] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:15.514 [2024-06-09 21:07:43.452218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.514 [2024-06-09 21:07:43.611383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.773 [2024-06-09 21:07:43.834717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.341 21:07:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:16.341 21:07:44 -- common/autotest_common.sh@852 -- # return 0 00:25:16.341 21:07:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@24 -- # local i 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:16.341 21:07:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:16.600 21:07:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:16.600 21:07:44 -- common/autotest_common.sh@857 -- # local i 00:25:16.600 21:07:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:16.600 21:07:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:16.600 21:07:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:16.600 21:07:44 -- common/autotest_common.sh@861 -- # break 00:25:16.600 21:07:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:16.600 21:07:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:16.600 21:07:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:16.600 1+0 records in 00:25:16.600 1+0 records out 00:25:16.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054899 s, 7.5 MB/s 00:25:16.600 21:07:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.600 21:07:44 -- common/autotest_common.sh@874 -- # size=4096 00:25:16.600 21:07:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.600 21:07:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:16.600 21:07:44 -- common/autotest_common.sh@877 -- # return 0 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:16.600 21:07:44 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:16.860 21:07:44 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:16.860 { 00:25:16.860 "nbd_device": "/dev/nbd0", 00:25:16.860 "bdev_name": "Nvme0n1" 00:25:16.860 } 00:25:16.860 ]' 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:16.860 { 00:25:16.860 "nbd_device": "/dev/nbd0", 00:25:16.860 "bdev_name": "Nvme0n1" 00:25:16.860 } 00:25:16.860 ]' 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@51 -- # local i 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:16.860 21:07:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@41 -- # break 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@45 -- # return 0 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:17.118 21:07:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@65 -- # true 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@65 -- # count=0 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@122 -- # count=0 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@127 -- # return 0 00:25:17.378 21:07:45 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.378 21:07:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:25:17.636 /dev/nbd0 00:25:17.636 21:07:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:17.636 21:07:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:17.636 21:07:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:17.636 21:07:45 -- common/autotest_common.sh@857 -- # local i 00:25:17.636 21:07:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:17.636 21:07:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:17.636 21:07:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:17.636 21:07:45 -- common/autotest_common.sh@861 -- # break 00:25:17.636 21:07:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:17.636 21:07:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:17.637 21:07:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.637 1+0 records in 00:25:17.637 1+0 records out 00:25:17.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503184 s, 8.1 MB/s 00:25:17.637 21:07:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.637 21:07:45 -- common/autotest_common.sh@874 -- # size=4096 00:25:17.637 21:07:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.637 21:07:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:17.637 21:07:45 -- common/autotest_common.sh@877 -- # return 0 00:25:17.637 21:07:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.637 21:07:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.637 21:07:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:17.637 21:07:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.637 21:07:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:17.896 { 00:25:17.896 "nbd_device": "/dev/nbd0", 00:25:17.896 "bdev_name": "Nvme0n1" 00:25:17.896 } 00:25:17.896 ]' 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:17.896 { 00:25:17.896 "nbd_device": "/dev/nbd0", 00:25:17.896 "bdev_name": "Nvme0n1" 00:25:17.896 } 00:25:17.896 ]' 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@65 -- # count=1 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@66 -- # echo 1 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@95 -- # count=1 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:25:17.896 21:07:45 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:17.896 256+0 records in 00:25:17.896 256+0 records out 00:25:17.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00816426 s, 128 MB/s 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:17.896 21:07:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:17.896 256+0 records in 00:25:17.896 256+0 records out 00:25:17.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0735022 s, 14.3 MB/s 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:17.896 21:07:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@51 -- # local i 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:18.155 21:07:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@41 -- # break 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:18.413 21:07:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:18.413 
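The data check traced above is a plain write-then-compare pattern: 1 MiB of random data is staged in a temp file, pushed to /dev/nbd0 with O_DIRECT, and verified byte-for-byte with cmp. Condensed from the trace, with paths shortened:

$ dd if=/dev/urandom of=nbdrandtest bs=4096 count=256             # stage 1 MiB of random data
$ dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the NBD device
$ cmp -b -n 1M nbdrandtest /dev/nbd0                              # confirm what landed on the device
$ rm nbdrandtest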
21:07:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@65 -- # true 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@65 -- # count=0 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@104 -- # count=0 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@109 -- # return 0 00:25:18.672 21:07:46 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:18.672 21:07:46 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:18.931 malloc_lvol_verify 00:25:18.931 21:07:46 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:19.189 2a35b03f-19ae-4850-9896-336eb969e1c6 00:25:19.189 21:07:47 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:19.189 25a213da-ec2c-4cf7-8718-e1d5fae4f0bf 00:25:19.189 21:07:47 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:19.448 /dev/nbd0 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:19.448 mke2fs 1.46.5 (30-Dec-2021) 00:25:19.448 00:25:19.448 Filesystem too small for a journal 00:25:19.448 Discarding device blocks: 0/1024 done 00:25:19.448 Creating filesystem with 1024 4k blocks and 1024 inodes 00:25:19.448 00:25:19.448 Allocating group tables: 0/1 done 00:25:19.448 Writing inode tables: 0/1 done 00:25:19.448 Writing superblocks and filesystem accounting information: 0/1 done 00:25:19.448 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@51 -- # local i 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:19.448 21:07:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@41 -- # break 00:25:19.707 21:07:47 -- 
bdev/nbd_common.sh@45 -- # return 0 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:19.707 21:07:47 -- bdev/nbd_common.sh@147 -- # return 0 00:25:19.707 21:07:47 -- bdev/blockdev.sh@324 -- # killprocess 130695 00:25:19.707 21:07:47 -- common/autotest_common.sh@926 -- # '[' -z 130695 ']' 00:25:19.707 21:07:47 -- common/autotest_common.sh@930 -- # kill -0 130695 00:25:19.707 21:07:47 -- common/autotest_common.sh@931 -- # uname 00:25:19.707 21:07:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:19.707 21:07:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130695 00:25:19.707 21:07:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:19.707 killing process with pid 130695 00:25:19.707 21:07:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:19.707 21:07:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130695' 00:25:19.707 21:07:47 -- common/autotest_common.sh@945 -- # kill 130695 00:25:19.707 21:07:47 -- common/autotest_common.sh@950 -- # wait 130695 00:25:20.643 21:07:48 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:25:20.643 00:25:20.643 real 0m5.404s 00:25:20.643 user 0m7.903s 00:25:20.643 sys 0m1.011s 00:25:20.643 21:07:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.643 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.643 ************************************ 00:25:20.643 END TEST bdev_nbd 00:25:20.644 ************************************ 00:25:20.902 21:07:48 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:25:20.902 21:07:48 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:25:20.902 skipping fio tests on NVMe due to multi-ns failures. 00:25:20.902 21:07:48 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:25:20.902 21:07:48 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:20.902 21:07:48 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:20.902 21:07:48 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:25:20.902 21:07:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:20.902 21:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.902 ************************************ 00:25:20.902 START TEST bdev_verify 00:25:20.902 ************************************ 00:25:20.902 21:07:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:20.902 [2024-06-09 21:07:48.912448] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:20.902 [2024-06-09 21:07:48.912683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130896 ] 00:25:21.161 [2024-06-09 21:07:49.087342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:21.161 [2024-06-09 21:07:49.249617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.161 [2024-06-09 21:07:49.249619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.728 Running I/O for 5 seconds... 
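The verify pass now running uses the bdevperf example app rather than a kernel block device: -w verify makes every completed read check the pattern written earlier, -q 128 keeps 128 I/Os in flight, -o 4096 sets 4 KiB I/Os, -t 5 bounds the run to five seconds, and -m 0x3 spreads the job across cores 0 and 1 (the two reactors started above). Reproduced standalone, with paths shortened, the invocation is:

$ build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3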
00:25:26.995 00:25:26.995 Latency(us) 00:25:26.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.995 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:26.995 Verification LBA range: start 0x0 length 0xa0000 00:25:26.995 Nvme0n1 : 5.01 17309.66 67.62 0.00 0.00 7362.98 467.32 15966.95 00:25:26.995 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:26.995 Verification LBA range: start 0xa0000 length 0xa0000 00:25:26.995 Nvme0n1 : 5.01 17306.22 67.60 0.00 0.00 7362.89 415.19 18350.08 00:25:26.995 =================================================================================================================== 00:25:26.995 Total : 34615.87 135.22 0.00 0.00 7362.93 415.19 18350.08 00:25:35.152 00:25:35.152 real 0m13.537s 00:25:35.152 user 0m25.925s 00:25:35.152 sys 0m0.296s 00:25:35.152 21:08:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.152 ************************************ 00:25:35.152 END TEST bdev_verify 00:25:35.152 ************************************ 00:25:35.152 21:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:35.152 21:08:02 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:35.152 21:08:02 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:25:35.152 21:08:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:35.152 21:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:35.152 ************************************ 00:25:35.152 START TEST bdev_verify_big_io 00:25:35.152 ************************************ 00:25:35.152 21:08:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:35.152 [2024-06-09 21:08:02.496667] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:35.152 [2024-06-09 21:08:02.496861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131072 ] 00:25:35.152 [2024-06-09 21:08:02.666885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:35.152 [2024-06-09 21:08:02.846986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.152 [2024-06-09 21:08:02.846991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.152 Running I/O for 5 seconds... 
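A quick consistency check on the verify table above: the MiB/s column is just IOPS times the 4 KiB I/O size. For the first job,

17309.66 IOPS x 4096 B/IO = 70,900,367 B/s; 70,900,367 / 1048576 ~= 67.62 MiB/s

which matches the reported 67.62 MiB/s.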
00:25:40.421 00:25:40.421 Latency(us) 00:25:40.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.421 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:40.421 Verification LBA range: start 0x0 length 0xa000 00:25:40.421 Nvme0n1 : 5.03 1980.98 123.81 0.00 0.00 63753.08 565.99 98184.84 00:25:40.421 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:40.421 Verification LBA range: start 0xa000 length 0xa000 00:25:40.421 Nvme0n1 : 5.04 1904.91 119.06 0.00 0.00 66325.97 536.20 101044.60 00:25:40.422 =================================================================================================================== 00:25:40.422 Total : 3885.89 242.87 0.00 0.00 65015.05 536.20 101044.60 00:25:41.798 00:25:41.798 real 0m7.197s 00:25:41.798 user 0m13.314s 00:25:41.798 sys 0m0.238s 00:25:41.798 21:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.798 21:08:09 -- common/autotest_common.sh@10 -- # set +x 00:25:41.798 ************************************ 00:25:41.798 END TEST bdev_verify_big_io 00:25:41.798 ************************************ 00:25:41.798 21:08:09 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.798 21:08:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:25:41.798 21:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:41.798 21:08:09 -- common/autotest_common.sh@10 -- # set +x 00:25:41.798 ************************************ 00:25:41.798 START TEST bdev_write_zeroes 00:25:41.798 ************************************ 00:25:41.798 21:08:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.798 [2024-06-09 21:08:09.737267] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:41.798 [2024-06-09 21:08:09.737433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131174 ] 00:25:41.798 [2024-06-09 21:08:09.889012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.056 [2024-06-09 21:08:10.076420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.314 Running I/O for 1 seconds... 
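The same arithmetic holds for the 64 KiB big-I/O table above, where the larger I/O size trades IOPS for per-I/O bandwidth:

1980.98 IOPS x 65536 B/IO = 129,825,505 B/s; 129,825,505 / 1048576 ~= 123.81 MiB/s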
00:25:43.714 00:25:43.714 Latency(us) 00:25:43.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.714 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:43.714 Nvme0n1 : 1.00 63654.76 248.65 0.00 0.00 2005.60 618.12 12809.31 00:25:43.714 =================================================================================================================== 00:25:43.714 Total : 63654.76 248.65 0.00 0.00 2005.60 618.12 12809.31 00:25:44.280 00:25:44.280 real 0m2.685s 00:25:44.280 user 0m2.364s 00:25:44.280 sys 0m0.221s 00:25:44.280 21:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.280 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.280 ************************************ 00:25:44.280 END TEST bdev_write_zeroes 00:25:44.280 ************************************ 00:25:44.280 21:08:12 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:44.280 21:08:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:25:44.280 21:08:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.280 21:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:44.280 ************************************ 00:25:44.280 START TEST bdev_json_nonenclosed 00:25:44.280 ************************************ 00:25:44.280 21:08:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:44.537 [2024-06-09 21:08:12.479173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:44.537 [2024-06-09 21:08:12.479525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131232 ] 00:25:44.537 [2024-06-09 21:08:12.648976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.795 [2024-06-09 21:08:12.805988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.795 [2024-06-09 21:08:12.806196] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:25:44.795 [2024-06-09 21:08:12.806256] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:45.053 00:25:45.053 real 0m0.695s 00:25:45.053 user 0m0.456s 00:25:45.053 sys 0m0.139s 00:25:45.053 21:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.053 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.053 ************************************ 00:25:45.053 END TEST bdev_json_nonenclosed 00:25:45.053 ************************************ 00:25:45.053 21:08:13 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:45.053 21:08:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:25:45.053 21:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.053 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.053 ************************************ 00:25:45.053 START TEST bdev_json_nonarray 00:25:45.053 ************************************ 00:25:45.053 21:08:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:45.053 [2024-06-09 21:08:13.213142] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:45.053 [2024-06-09 21:08:13.213299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131260 ] 00:25:45.311 [2024-06-09 21:08:13.369027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.570 [2024-06-09 21:08:13.526975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.570 [2024-06-09 21:08:13.527196] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
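Both JSON failures here are the point of the tests: bdev_json_nonenclosed and bdev_json_nonarray hand bdevperf deliberately malformed configs and pass only if json_config.c rejects them with the errors logged above. The log never shows the contents of nonenclosed.json or nonarray.json, so the following minimal pair is an assumption, shaped only to provoke the two logged messages:

# hypothetical file contents -- only the error strings come from the log
$ cat nonenclosed.json   # top level is an array, not an object -> "not enclosed in {}"
[ { "subsystem": "bdev", "config": [] } ]
$ cat nonarray.json      # "subsystems" is an object -> "'subsystems' should be an array"
{ "subsystems": { "subsystem": "bdev", "config": [] } }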
00:25:45.570 [2024-06-09 21:08:13.527254] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:45.829 00:25:45.829 real 0m0.686s 00:25:45.829 user 0m0.482s 00:25:45.829 sys 0m0.103s 00:25:45.829 21:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.829 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.829 ************************************ 00:25:45.829 END TEST bdev_json_nonarray 00:25:45.829 ************************************ 00:25:45.829 21:08:13 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:25:45.829 21:08:13 -- bdev/blockdev.sh@809 -- # cleanup 00:25:45.829 21:08:13 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:45.829 21:08:13 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:45.829 21:08:13 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:25:45.829 21:08:13 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:25:45.829 ************************************ 00:25:45.829 END TEST blockdev_nvme 00:25:45.829 ************************************ 00:25:45.829 00:25:45.829 real 0m38.409s 00:25:45.829 user 1m1.430s 00:25:45.829 sys 0m3.265s 00:25:45.829 21:08:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.829 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.829 21:08:13 -- spdk/autotest.sh@219 -- # uname -s 00:25:45.829 21:08:13 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:25:45.829 21:08:13 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:45.829 21:08:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:45.829 21:08:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.829 21:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:45.829 ************************************ 00:25:45.829 START TEST blockdev_nvme_gpt 00:25:45.829 ************************************ 00:25:45.829 21:08:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:45.829 * Looking for test storage... 
00:25:46.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:46.087 21:08:14 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:46.087 21:08:14 -- bdev/nbd_common.sh@6 -- # set -e 00:25:46.087 21:08:14 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:46.087 21:08:14 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:46.087 21:08:14 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:46.087 21:08:14 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:46.087 21:08:14 -- bdev/blockdev.sh@18 -- # : 00:25:46.087 21:08:14 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:25:46.087 21:08:14 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:25:46.087 21:08:14 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:25:46.087 21:08:14 -- bdev/blockdev.sh@672 -- # uname -s 00:25:46.087 21:08:14 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:25:46.087 21:08:14 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:25:46.087 21:08:14 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:25:46.087 21:08:14 -- bdev/blockdev.sh@681 -- # crypto_device= 00:25:46.087 21:08:14 -- bdev/blockdev.sh@682 -- # dek= 00:25:46.087 21:08:14 -- bdev/blockdev.sh@683 -- # env_ctx= 00:25:46.087 21:08:14 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:25:46.087 21:08:14 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:25:46.087 21:08:14 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:25:46.087 21:08:14 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:25:46.087 21:08:14 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:25:46.087 21:08:14 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=131343 00:25:46.087 21:08:14 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:46.087 21:08:14 -- bdev/blockdev.sh@47 -- # waitforlisten 131343 00:25:46.087 21:08:14 -- common/autotest_common.sh@819 -- # '[' -z 131343 ']' 00:25:46.087 21:08:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.087 21:08:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:46.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.087 21:08:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.087 21:08:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:46.087 21:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:46.087 21:08:14 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:46.087 [2024-06-09 21:08:14.081389] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:46.087 [2024-06-09 21:08:14.081581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131343 ] 00:25:46.088 [2024-06-09 21:08:14.229075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.346 [2024-06-09 21:08:14.387347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:46.346 [2024-06-09 21:08:14.387601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.723 21:08:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:47.723 21:08:15 -- common/autotest_common.sh@852 -- # return 0 00:25:47.723 21:08:15 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:25:47.723 21:08:15 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:25:47.723 21:08:15 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:47.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:47.982 Waiting for block devices as requested 00:25:47.982 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:47.982 21:08:16 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:25:47.982 21:08:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:25:47.982 21:08:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:25:47.982 21:08:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:25:47.982 21:08:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:25:47.982 21:08:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:25:47.982 21:08:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:25:47.982 21:08:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:47.982 21:08:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:25:47.982 21:08:16 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:25:47.982 21:08:16 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:25:47.982 21:08:16 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:25:47.982 21:08:16 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:25:47.982 21:08:16 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:25:47.982 21:08:16 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:25:47.982 21:08:16 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:25:47.982 21:08:16 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:25:47.982 BYT; 00:25:47.982 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:25:47.982 21:08:16 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:25:47.982 BYT; 00:25:47.982 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:25:47.982 21:08:16 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:25:47.982 21:08:16 -- bdev/blockdev.sh@114 -- # break 00:25:47.982 21:08:16 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:25:47.982 21:08:16 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:25:47.982 21:08:16 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:25:47.982 21:08:16 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:25:48.240 21:08:16 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:25:48.240 21:08:16 -- scripts/common.sh@410 -- # local spdk_guid 00:25:48.240 21:08:16 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:48.240 21:08:16 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:48.240 21:08:16 -- scripts/common.sh@415 -- # IFS='()' 00:25:48.240 21:08:16 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:25:48.240 21:08:16 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:48.240 21:08:16 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:25:48.240 21:08:16 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:48.240 21:08:16 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:48.499 21:08:16 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:48.499 21:08:16 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:25:48.499 21:08:16 -- scripts/common.sh@422 -- # local spdk_guid 00:25:48.499 21:08:16 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:48.499 21:08:16 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:48.499 21:08:16 -- scripts/common.sh@427 -- # IFS='()' 00:25:48.499 21:08:16 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:25:48.499 21:08:16 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:48.499 21:08:16 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:25:48.499 21:08:16 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:48.499 21:08:16 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:48.499 21:08:16 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:48.499 21:08:16 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:25:49.433 The operation has completed successfully. 00:25:49.433 21:08:17 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:25:50.368 The operation has completed successfully. 
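The GPT setup just completed does two things: parted lays down a fresh GPT label with two equal partitions, then sgdisk rewrites each partition's type GUID to the SPDK-reserved values greped out of module/bdev/gpt/gpt.h above (and pins known unique GUIDs), so the gpt vbdev module will claim the partitions and expose them as Nvme0n1p1/Nvme0n1p2. Condensed from the trace:

$ parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
$ sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1   # SPDK_GPT_PART_TYPE_GUID
$ sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1   # SPDK_GPT_PART_TYPE_GUID_OLD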
00:25:50.368 21:08:18 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:50.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:50.934 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.869 21:08:19 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:25:51.869 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.869 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.869 [] 00:25:51.869 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.869 21:08:19 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:25:51.869 21:08:19 -- bdev/blockdev.sh@79 -- # local json 00:25:51.869 21:08:19 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:25:51.869 21:08:19 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:51.869 21:08:19 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:25:51.869 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.869 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.869 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.869 21:08:19 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:25:51.869 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.869 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.869 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.869 21:08:19 -- bdev/blockdev.sh@738 -- # cat 00:25:51.869 21:08:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:25:51.869 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.870 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.870 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.870 21:08:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:25:51.870 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.870 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.870 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.870 21:08:19 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:51.870 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.870 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.870 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.870 21:08:19 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:25:51.870 21:08:19 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:25:51.870 21:08:19 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:25:51.870 21:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.870 21:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.870 21:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.870 21:08:19 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:25:51.870 21:08:19 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:25:51.870 21:08:19 -- bdev/blockdev.sh@747 -- # jq -r .name 00:25:51.870 21:08:20 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:25:51.870 21:08:20 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:25:51.870 21:08:20 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:25:51.870 21:08:20 -- bdev/blockdev.sh@752 -- # killprocess 131343 00:25:51.870 21:08:20 -- common/autotest_common.sh@926 -- # '[' -z 131343 ']' 00:25:51.870 21:08:20 -- common/autotest_common.sh@930 -- # kill -0 131343 00:25:52.128 21:08:20 -- common/autotest_common.sh@931 -- # uname 00:25:52.128 21:08:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.128 21:08:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131343 00:25:52.128 21:08:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:52.128 21:08:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:52.128 21:08:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131343' 00:25:52.128 killing process with pid 131343 00:25:52.128 21:08:20 -- common/autotest_common.sh@945 -- # kill 131343 00:25:52.128 21:08:20 -- common/autotest_common.sh@950 -- # wait 131343 00:25:54.030 21:08:21 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:54.030 21:08:21 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:25:54.030 21:08:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:25:54.030 21:08:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.030 21:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:54.030 ************************************ 00:25:54.030 START TEST bdev_hello_world 00:25:54.030 ************************************ 00:25:54.030 21:08:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:25:54.030 [2024-06-09 21:08:21.883205] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:54.030 [2024-06-09 21:08:21.883803] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131778 ] 00:25:54.030 [2024-06-09 21:08:22.052773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.289 [2024-06-09 21:08:22.210419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.547 [2024-06-09 21:08:22.609131] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:54.547 [2024-06-09 21:08:22.609418] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:25:54.547 [2024-06-09 21:08:22.609547] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:54.547 [2024-06-09 21:08:22.612389] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:54.547 [2024-06-09 21:08:22.612996] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:54.547 [2024-06-09 21:08:22.613193] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:54.547 [2024-06-09 21:08:22.613566] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:54.547 00:25:54.547 [2024-06-09 21:08:22.613775] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:55.483 ************************************ 00:25:55.483 END TEST bdev_hello_world 00:25:55.483 ************************************ 00:25:55.483 00:25:55.483 real 0m1.707s 00:25:55.483 user 0m1.385s 00:25:55.483 sys 0m0.220s 00:25:55.483 21:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.483 21:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:55.483 21:08:23 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:25:55.483 21:08:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:55.483 21:08:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:55.483 21:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:55.483 ************************************ 00:25:55.483 START TEST bdev_bounds 00:25:55.483 ************************************ 00:25:55.483 21:08:23 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:25:55.483 21:08:23 -- bdev/blockdev.sh@288 -- # bdevio_pid=131829 00:25:55.483 21:08:23 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:55.483 21:08:23 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:55.483 21:08:23 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 131829' 00:25:55.483 Process bdevio pid: 131829 00:25:55.483 21:08:23 -- bdev/blockdev.sh@291 -- # waitforlisten 131829 00:25:55.483 21:08:23 -- common/autotest_common.sh@819 -- # '[' -z 131829 ']' 00:25:55.483 21:08:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.483 21:08:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:55.483 21:08:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
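The hello-world pass above exercises the bdev API end to end: hello_bdev opens Nvme0n1p1, writes the string "Hello World!" through an I/O channel, reads it back, and prints it, exactly as the NOTICE lines show. Its invocation, taken from the trace with paths shortened:

$ build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1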
00:25:55.483 21:08:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:55.483 21:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:55.483 [2024-06-09 21:08:23.647508] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:55.483 [2024-06-09 21:08:23.647903] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131829 ] 00:25:55.742 [2024-06-09 21:08:23.823348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:56.000 [2024-06-09 21:08:23.984689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.000 [2024-06-09 21:08:23.984836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.000 [2024-06-09 21:08:23.984832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.567 21:08:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:56.567 21:08:24 -- common/autotest_common.sh@852 -- # return 0 00:25:56.567 21:08:24 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:56.567 I/O targets: 00:25:56.567 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:25:56.567 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:25:56.567 00:25:56.567 00:25:56.567 CUnit - A unit testing framework for C - Version 2.1-3 00:25:56.567 http://cunit.sourceforge.net/ 00:25:56.567 00:25:56.567 00:25:56.567 Suite: bdevio tests on: Nvme0n1p2 00:25:56.567 Test: blockdev write read block ...passed 00:25:56.567 Test: blockdev write zeroes read block ...passed 00:25:56.567 Test: blockdev write zeroes read no split ...passed 00:25:56.567 Test: blockdev write zeroes read split ...passed 00:25:56.567 Test: blockdev write zeroes read split partial ...passed 00:25:56.567 Test: blockdev reset ...[2024-06-09 21:08:24.691519] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:56.567 [2024-06-09 21:08:24.694803] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:56.567 passed 00:25:56.567 Test: blockdev write read 8 blocks ...passed 00:25:56.567 Test: blockdev write read size > 128k ...passed 00:25:56.567 Test: blockdev write read invalid size ...passed 00:25:56.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:56.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:56.567 Test: blockdev write read max offset ...passed 00:25:56.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:56.567 Test: blockdev writev readv 8 blocks ...passed 00:25:56.567 Test: blockdev writev readv 30 x 1block ...passed 00:25:56.567 Test: blockdev writev readv block ...passed 00:25:56.567 Test: blockdev writev readv size > 128k ...passed 00:25:56.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:56.568 Test: blockdev comparev and writev ...[2024-06-09 21:08:24.704966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x480b000 len:0x1000 00:25:56.568 [2024-06-09 21:08:24.705224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:56.568 passed 00:25:56.568 Test: blockdev nvme passthru rw ...passed 00:25:56.568 Test: blockdev nvme passthru vendor specific ...passed 00:25:56.568 Test: blockdev nvme admin passthru ...passed 00:25:56.568 Test: blockdev copy ...passed 00:25:56.568 Suite: bdevio tests on: Nvme0n1p1 00:25:56.568 Test: blockdev write read block ...passed 00:25:56.568 Test: blockdev write zeroes read block ...passed 00:25:56.568 Test: blockdev write zeroes read no split ...passed 00:25:56.568 Test: blockdev write zeroes read split ...passed 00:25:56.826 Test: blockdev write zeroes read split partial ...passed 00:25:56.826 Test: blockdev reset ...[2024-06-09 21:08:24.755521] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:56.826 [2024-06-09 21:08:24.758597] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:56.826 passed 00:25:56.826 Test: blockdev write read 8 blocks ...passed 00:25:56.826 Test: blockdev write read size > 128k ...passed 00:25:56.826 Test: blockdev write read invalid size ...passed 00:25:56.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:56.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:56.826 Test: blockdev write read max offset ...passed 00:25:56.826 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:56.826 Test: blockdev writev readv 8 blocks ...passed 00:25:56.826 Test: blockdev writev readv 30 x 1block ...passed 00:25:56.826 Test: blockdev writev readv block ...passed 00:25:56.826 Test: blockdev writev readv size > 128k ...passed 00:25:56.826 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:56.826 Test: blockdev comparev and writev ...[2024-06-09 21:08:24.768729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x480d000 len:0x1000 00:25:56.826 [2024-06-09 21:08:24.768958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:56.826 passed 00:25:56.826 Test: blockdev nvme passthru rw ...passed 00:25:56.826 Test: blockdev nvme passthru vendor specific ...passed 00:25:56.826 Test: blockdev nvme admin passthru ...passed 00:25:56.826 Test: blockdev copy ...passed 00:25:56.826 00:25:56.826 Run Summary: Type Total Ran Passed Failed Inactive 00:25:56.826 suites 2 2 n/a 0 0 00:25:56.826 tests 46 46 46 0 0 00:25:56.826 asserts 284 284 284 0 n/a 00:25:56.826 00:25:56.826 Elapsed time = 0.353 seconds 00:25:56.826 0 00:25:56.826 21:08:24 -- bdev/blockdev.sh@293 -- # killprocess 131829 00:25:56.826 21:08:24 -- common/autotest_common.sh@926 -- # '[' -z 131829 ']' 00:25:56.826 21:08:24 -- common/autotest_common.sh@930 -- # kill -0 131829 00:25:56.826 21:08:24 -- common/autotest_common.sh@931 -- # uname 00:25:56.826 21:08:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:56.826 21:08:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131829 00:25:56.826 killing process with pid 131829 00:25:56.826 21:08:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:56.826 21:08:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:56.826 21:08:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131829' 00:25:56.826 21:08:24 -- common/autotest_common.sh@945 -- # kill 131829 00:25:56.826 21:08:24 -- common/autotest_common.sh@950 -- # wait 131829 00:25:57.762 ************************************ 00:25:57.762 END TEST bdev_bounds 00:25:57.762 ************************************ 00:25:57.762 21:08:25 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:25:57.762 00:25:57.762 real 0m2.097s 00:25:57.762 user 0m4.891s 00:25:57.762 sys 0m0.358s 00:25:57.762 21:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.762 21:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:57.762 21:08:25 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:25:57.762 21:08:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:25:57.762 21:08:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.762 21:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:57.762 ************************************ 00:25:57.762 START TEST bdev_nbd 
00:25:57.762 ************************************ 00:25:57.762 21:08:25 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:25:57.762 21:08:25 -- bdev/blockdev.sh@298 -- # uname -s 00:25:57.762 21:08:25 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:25:57.762 21:08:25 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:57.762 21:08:25 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:57.762 21:08:25 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:25:57.762 21:08:25 -- bdev/blockdev.sh@302 -- # local bdev_all 00:25:57.762 21:08:25 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:25:57.762 21:08:25 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:25:57.762 21:08:25 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:57.762 21:08:25 -- bdev/blockdev.sh@309 -- # local nbd_all 00:25:57.762 21:08:25 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:25:57.762 21:08:25 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:57.762 21:08:25 -- bdev/blockdev.sh@312 -- # local nbd_list 00:25:57.762 21:08:25 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:57.762 21:08:25 -- bdev/blockdev.sh@313 -- # local bdev_list 00:25:57.762 21:08:25 -- bdev/blockdev.sh@316 -- # nbd_pid=131885 00:25:57.762 21:08:25 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:57.762 21:08:25 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:57.762 21:08:25 -- bdev/blockdev.sh@318 -- # waitforlisten 131885 /var/tmp/spdk-nbd.sock 00:25:57.762 21:08:25 -- common/autotest_common.sh@819 -- # '[' -z 131885 ']' 00:25:57.762 21:08:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:57.762 21:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:57.762 21:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:57.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:57.762 21:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:57.762 21:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:57.762 [2024-06-09 21:08:25.787163] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:57.762 [2024-06-09 21:08:25.787482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.020 [2024-06-09 21:08:25.940820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.020 [2024-06-09 21:08:26.124971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.587 21:08:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:58.587 21:08:26 -- common/autotest_common.sh@852 -- # return 0 00:25:58.587 21:08:26 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@24 -- # local i 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:58.587 21:08:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:25:58.845 21:08:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:58.845 21:08:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:58.845 21:08:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:58.845 21:08:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:58.845 21:08:27 -- common/autotest_common.sh@857 -- # local i 00:25:58.845 21:08:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:58.845 21:08:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:58.845 21:08:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:58.845 21:08:27 -- common/autotest_common.sh@861 -- # break 00:25:58.845 21:08:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:58.845 21:08:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:58.845 21:08:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.845 1+0 records in 00:25:58.845 1+0 records out 00:25:58.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820701 s, 5.0 MB/s 00:25:58.845 21:08:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.103 21:08:27 -- common/autotest_common.sh@874 -- # size=4096 00:25:59.103 21:08:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.103 21:08:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:59.103 21:08:27 -- common/autotest_common.sh@877 -- # return 0 00:25:59.103 21:08:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:59.103 21:08:27 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:59.104 21:08:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:59.362 21:08:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:59.362 21:08:27 -- common/autotest_common.sh@857 -- # local i 00:25:59.362 21:08:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:59.362 21:08:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:59.362 21:08:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:59.362 21:08:27 -- common/autotest_common.sh@861 -- # break 00:25:59.362 21:08:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:59.362 21:08:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:59.362 21:08:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.362 1+0 records in 00:25:59.362 1+0 records out 00:25:59.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729658 s, 5.6 MB/s 00:25:59.362 21:08:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.362 21:08:27 -- common/autotest_common.sh@874 -- # size=4096 00:25:59.362 21:08:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.362 21:08:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:59.362 21:08:27 -- common/autotest_common.sh@877 -- # return 0 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:59.362 21:08:27 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:59.621 { 00:25:59.621 "nbd_device": "/dev/nbd0", 00:25:59.621 "bdev_name": "Nvme0n1p1" 00:25:59.621 }, 00:25:59.621 { 00:25:59.621 "nbd_device": "/dev/nbd1", 00:25:59.621 "bdev_name": "Nvme0n1p2" 00:25:59.621 } 00:25:59.621 ]' 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:59.621 { 00:25:59.621 "nbd_device": "/dev/nbd0", 00:25:59.621 "bdev_name": "Nvme0n1p1" 00:25:59.621 }, 00:25:59.621 { 00:25:59.621 "nbd_device": "/dev/nbd1", 00:25:59.621 "bdev_name": "Nvme0n1p2" 00:25:59.621 } 00:25:59.621 ]' 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@51 -- # local i 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.621 21:08:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.880 21:08:27 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@41 -- # break 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.880 21:08:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@41 -- # break 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:59.880 21:08:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:00.139 21:08:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:00.139 21:08:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:00.139 21:08:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@65 -- # true 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@65 -- # count=0 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@122 -- # count=0 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@127 -- # return 0 00:26:00.399 21:08:28 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@12 -- # local i 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:00.399 21:08:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:26:00.658 /dev/nbd0 00:26:00.658 21:08:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:00.658 21:08:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:00.658 21:08:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:00.658 21:08:28 -- common/autotest_common.sh@857 -- # local i 00:26:00.658 21:08:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:00.658 21:08:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:00.658 21:08:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:00.658 21:08:28 -- common/autotest_common.sh@861 -- # break 00:26:00.658 21:08:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:00.658 21:08:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:00.658 21:08:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.658 1+0 records in 00:26:00.658 1+0 records out 00:26:00.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000985331 s, 4.2 MB/s 00:26:00.658 21:08:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.658 21:08:28 -- common/autotest_common.sh@874 -- # size=4096 00:26:00.658 21:08:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.658 21:08:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:00.658 21:08:28 -- common/autotest_common.sh@877 -- # return 0 00:26:00.658 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.658 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:00.658 21:08:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:26:00.917 /dev/nbd1 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:00.917 21:08:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:00.917 21:08:28 -- common/autotest_common.sh@857 -- # local i 00:26:00.917 21:08:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:00.917 21:08:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:00.917 21:08:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:00.917 21:08:28 -- common/autotest_common.sh@861 -- # break 00:26:00.917 21:08:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:00.917 21:08:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:00.917 21:08:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.917 1+0 records in 00:26:00.917 1+0 records out 00:26:00.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785905 s, 5.2 MB/s 00:26:00.917 21:08:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.917 21:08:28 -- common/autotest_common.sh@874 -- # size=4096 00:26:00.917 21:08:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.917 21:08:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:00.917 21:08:28 -- common/autotest_common.sh@877 -- # return 0 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
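For reference, the export-and-check flow traced above reduces to a handful of commands. A minimal sketch, assuming an SPDK app is already listening on /var/tmp/spdk-nbd.sock with both partition bdevs loaded and the nbd kernel module present (paths as in this run; the bounded retry stands in for the test's waitfornbd helper):

    # Export both GPT partition bdevs as kernel block devices
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
    for d in nbd0 nbd1; do
        # Poll /proc/partitions until the device registers (the helper gives up after 20 tries)
        for i in $(seq 1 20); do grep -q -w "$d" /proc/partitions && break; sleep 0.1; done
        # Sanity check: one 4 KiB O_DIRECT read through the new block device
        dd if=/dev/$d of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    done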
00:26:00.917 21:08:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:00.917 21:08:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:01.177 { 00:26:01.177 "nbd_device": "/dev/nbd0", 00:26:01.177 "bdev_name": "Nvme0n1p1" 00:26:01.177 }, 00:26:01.177 { 00:26:01.177 "nbd_device": "/dev/nbd1", 00:26:01.177 "bdev_name": "Nvme0n1p2" 00:26:01.177 } 00:26:01.177 ]' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:01.177 { 00:26:01.177 "nbd_device": "/dev/nbd0", 00:26:01.177 "bdev_name": "Nvme0n1p1" 00:26:01.177 }, 00:26:01.177 { 00:26:01.177 "nbd_device": "/dev/nbd1", 00:26:01.177 "bdev_name": "Nvme0n1p2" 00:26:01.177 } 00:26:01.177 ]' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:01.177 /dev/nbd1' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:01.177 /dev/nbd1' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@65 -- # count=2 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@95 -- # count=2 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:01.177 256+0 records in 00:26:01.177 256+0 records out 00:26:01.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110795 s, 94.6 MB/s 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:01.177 256+0 records in 00:26:01.177 256+0 records out 00:26:01.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0763909 s, 13.7 MB/s 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:01.177 256+0 records in 00:26:01.177 256+0 records out 00:26:01.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0878275 s, 11.9 MB/s 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:01.177 21:08:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
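Distilled, the write phase just completed and the compare phase that follows form a simple integrity check: push one shared random buffer through each NBD export, then byte-compare the device contents against the source file. A minimal sketch under the same assumptions (the scratch path is illustrative):

    # One shared 1 MiB random source buffer
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write through the export
    done
    for d in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest $d   # byte-for-byte readback comparison
    done
    rm /tmp/nbdrandtest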
00:26:01.436 21:08:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@51 -- # local i 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.436 21:08:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@41 -- # break 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.696 21:08:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@41 -- # break 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:01.955 21:08:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@65 -- # true 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@65 -- # count=0 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@104 -- # count=0 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:02.214 21:08:30 -- 
bdev/nbd_common.sh@109 -- # return 0 00:26:02.214 21:08:30 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:02.214 21:08:30 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:02.473 malloc_lvol_verify 00:26:02.473 21:08:30 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:02.732 f32bb332-8343-475c-8ea7-2f4c5d5ff581 00:26:02.732 21:08:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:02.992 a1260996-e2b2-4557-9eba-b10dbf67829b 00:26:02.992 21:08:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:02.992 /dev/nbd0 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:02.992 mke2fs 1.46.5 (30-Dec-2021) 00:26:02.992 00:26:02.992 Filesystem too small for a journal 00:26:02.992 Discarding device blocks: 0/1024 done 00:26:02.992 Creating filesystem with 1024 4k blocks and 1024 inodes 00:26:02.992 00:26:02.992 Allocating group tables: 0/1 done 00:26:02.992 Writing inode tables: 0/1 done 00:26:02.992 Writing superblocks and filesystem accounting information: 0/1 done 00:26:02.992 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@51 -- # local i 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:02.992 21:08:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@41 -- # break 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@45 -- # return 0 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:03.251 21:08:31 -- bdev/nbd_common.sh@147 -- # return 0 00:26:03.251 21:08:31 -- bdev/blockdev.sh@324 -- # killprocess 131885 00:26:03.251 21:08:31 -- common/autotest_common.sh@926 -- # '[' -z 131885 ']' 00:26:03.251 21:08:31 -- common/autotest_common.sh@930 -- # kill -0 131885 00:26:03.251 21:08:31 -- common/autotest_common.sh@931 -- # uname 00:26:03.251 21:08:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:03.251 21:08:31 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131885 00:26:03.251 21:08:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:03.251 21:08:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:03.251 21:08:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131885' 00:26:03.251 killing process with pid 131885 00:26:03.251 21:08:31 -- common/autotest_common.sh@945 -- # kill 131885 00:26:03.251 21:08:31 -- common/autotest_common.sh@950 -- # wait 131885 00:26:04.185 ************************************ 00:26:04.185 END TEST bdev_nbd 00:26:04.185 ************************************ 00:26:04.185 21:08:32 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:26:04.185 00:26:04.185 real 0m6.607s 00:26:04.185 user 0m9.730s 00:26:04.185 sys 0m1.426s 00:26:04.185 21:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.185 21:08:32 -- common/autotest_common.sh@10 -- # set +x 00:26:04.442 21:08:32 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:26:04.442 21:08:32 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:26:04.442 21:08:32 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:26:04.442 21:08:32 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:26:04.442 skipping fio tests on NVMe due to multi-ns failures. 00:26:04.442 21:08:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:04.442 21:08:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:04.442 21:08:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:04.442 21:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:04.442 21:08:32 -- common/autotest_common.sh@10 -- # set +x 00:26:04.442 ************************************ 00:26:04.442 START TEST bdev_verify 00:26:04.442 ************************************ 00:26:04.442 21:08:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:04.442 [2024-06-09 21:08:32.441809] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:04.442 [2024-06-09 21:08:32.442108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132137 ] 00:26:04.442 [2024-06-09 21:08:32.596996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:04.701 [2024-06-09 21:08:32.756459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.701 [2024-06-09 21:08:32.756456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.278 Running I/O for 5 seconds... 
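The verify stage above is a plain bdevperf run; the invocation from this run, reproducible by hand against the same JSON config (its results appear in the table below), is:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: data-verification workload;
    # -t 5: five-second run; -m 0x3: pin reactors to cores 0 and 1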
00:26:10.565 00:26:10.565 Latency(us) 00:26:10.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.565 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:10.565 Verification LBA range: start 0x0 length 0x4ff80 00:26:10.565 Nvme0n1p1 : 5.02 7649.74 29.88 0.00 0.00 16685.89 1146.88 25618.62 00:26:10.565 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:10.565 Verification LBA range: start 0x4ff80 length 0x4ff80 00:26:10.565 Nvme0n1p1 : 5.02 7653.22 29.90 0.00 0.00 16680.78 1563.93 28120.90 00:26:10.565 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:10.565 Verification LBA range: start 0x0 length 0x4ff7f 00:26:10.565 Nvme0n1p2 : 5.02 7646.14 29.87 0.00 0.00 16671.71 1273.48 24188.74 00:26:10.565 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:10.565 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:26:10.565 Nvme0n1p2 : 5.02 7649.36 29.88 0.00 0.00 16666.28 990.49 25737.77 00:26:10.565 =================================================================================================================== 00:26:10.565 Total : 30598.46 119.53 0.00 0.00 16676.17 990.49 28120.90 00:26:13.848 ************************************ 00:26:13.848 END TEST bdev_verify 00:26:13.848 ************************************ 00:26:13.848 00:26:13.848 real 0m9.565s 00:26:13.848 user 0m18.090s 00:26:13.848 sys 0m0.237s 00:26:13.848 21:08:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.848 21:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:13.848 21:08:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:13.848 21:08:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:26:13.848 21:08:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.848 21:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:13.848 ************************************ 00:26:13.848 START TEST bdev_verify_big_io 00:26:13.848 ************************************ 00:26:13.848 21:08:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:14.106 [2024-06-09 21:08:42.063490] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:14.107 [2024-06-09 21:08:42.063809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132262 ] 00:26:14.107 [2024-06-09 21:08:42.218785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:14.364 [2024-06-09 21:08:42.400269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.364 [2024-06-09 21:08:42.400279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.929 Running I/O for 5 seconds... 
00:26:20.195 00:26:20.195 Latency(us) 00:26:20.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.195 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:20.195 Verification LBA range: start 0x0 length 0x4ff8 00:26:20.195 Nvme0n1p1 : 5.11 859.13 53.70 0.00 0.00 147273.06 2561.86 208761.95 00:26:20.195 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:20.195 Verification LBA range: start 0x4ff8 length 0x4ff8 00:26:20.195 Nvme0n1p1 : 5.11 827.06 51.69 0.00 0.00 152961.46 2427.81 216387.96 00:26:20.196 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:20.196 Verification LBA range: start 0x0 length 0x4ff7 00:26:20.196 Nvme0n1p2 : 5.12 867.34 54.21 0.00 0.00 144208.82 930.91 160146.15 00:26:20.196 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:20.196 Verification LBA range: start 0x4ff7 length 0x4ff7 00:26:20.196 Nvme0n1p2 : 5.11 834.39 52.15 0.00 0.00 149766.37 1333.06 163005.91 00:26:20.196 =================================================================================================================== 00:26:20.196 Total : 3387.93 211.75 0.00 0.00 148489.16 930.91 216387.96 00:26:21.132 ************************************ 00:26:21.132 END TEST bdev_verify_big_io 00:26:21.132 ************************************ 00:26:21.132 00:26:21.132 real 0m7.262s 00:26:21.132 user 0m13.445s 00:26:21.132 sys 0m0.259s 00:26:21.132 21:08:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.132 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:21.391 21:08:49 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:21.391 21:08:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:21.391 21:08:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.391 21:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:21.391 ************************************ 00:26:21.391 START TEST bdev_write_zeroes 00:26:21.391 ************************************ 00:26:21.391 21:08:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:21.391 [2024-06-09 21:08:49.383051] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:21.391 [2024-06-09 21:08:49.383449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132370 ] 00:26:21.391 [2024-06-09 21:08:49.541030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.650 [2024-06-09 21:08:49.699742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.216 Running I/O for 1 seconds... 
00:26:23.147 00:26:23.147 Latency(us) 00:26:23.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.147 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:23.147 Nvme0n1p1 : 1.00 28905.87 112.91 0.00 0.00 4419.32 2353.34 15847.80 00:26:23.148 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:23.148 Nvme0n1p2 : 1.01 28838.61 112.65 0.00 0.00 4423.01 2442.71 14537.08 00:26:23.148 =================================================================================================================== 00:26:23.148 Total : 57744.48 225.56 0.00 0.00 4421.17 2353.34 15847.80 00:26:24.081 ************************************ 00:26:24.081 END TEST bdev_write_zeroes 00:26:24.081 ************************************ 00:26:24.081 00:26:24.081 real 0m2.627s 00:26:24.081 user 0m2.314s 00:26:24.081 sys 0m0.212s 00:26:24.081 21:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.081 21:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:24.081 21:08:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:24.081 21:08:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:24.081 21:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.081 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:24.081 ************************************ 00:26:24.081 START TEST bdev_json_nonenclosed 00:26:24.081 ************************************ 00:26:24.081 21:08:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:24.081 [2024-06-09 21:08:52.075936] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:24.081 [2024-06-09 21:08:52.076115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132427 ] 00:26:24.081 [2024-06-09 21:08:52.238951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.340 [2024-06-09 21:08:52.423455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.340 [2024-06-09 21:08:52.423653] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:26:24.340 [2024-06-09 21:08:52.423693] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:24.597 ************************************ 00:26:24.597 END TEST bdev_json_nonenclosed 00:26:24.597 ************************************ 00:26:24.597 00:26:24.597 real 0m0.725s 00:26:24.597 user 0m0.497s 00:26:24.597 sys 0m0.128s 00:26:24.597 21:08:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.597 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:24.855 21:08:52 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:24.855 21:08:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:26:24.855 21:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.855 21:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:24.855 ************************************ 00:26:24.855 START TEST bdev_json_nonarray 00:26:24.855 ************************************ 00:26:24.855 21:08:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:24.855 [2024-06-09 21:08:52.836840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:24.855 [2024-06-09 21:08:52.837000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132456 ] 00:26:24.855 [2024-06-09 21:08:52.986958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.114 [2024-06-09 21:08:53.143960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.114 [2024-06-09 21:08:53.144440] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:26:25.114 [2024-06-09 21:08:53.144596] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:25.373 00:26:25.373 real 0m0.672s 00:26:25.373 user 0m0.452s 00:26:25.373 sys 0m0.120s 00:26:25.373 21:08:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.373 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:25.373 ************************************ 00:26:25.373 END TEST bdev_json_nonarray 00:26:25.373 ************************************ 00:26:25.373 21:08:53 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:26:25.373 21:08:53 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:26:25.373 21:08:53 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:26:25.373 21:08:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:25.373 21:08:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.373 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:25.373 ************************************ 00:26:25.373 START TEST bdev_gpt_uuid 00:26:25.373 ************************************ 00:26:25.373 21:08:53 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:26:25.373 21:08:53 -- bdev/blockdev.sh@612 -- # local bdev 00:26:25.373 21:08:53 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:26:25.373 21:08:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=132495 00:26:25.373 21:08:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:25.373 21:08:53 -- bdev/blockdev.sh@47 -- # waitforlisten 132495 00:26:25.373 21:08:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:25.373 21:08:53 -- common/autotest_common.sh@819 -- # '[' -z 132495 ']' 00:26:25.373 21:08:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.373 21:08:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:25.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.373 21:08:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.373 21:08:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:25.373 21:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:25.631 [2024-06-09 21:08:53.578857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:25.631 [2024-06-09 21:08:53.579179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132495 ] 00:26:25.631 [2024-06-09 21:08:53.734700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.890 [2024-06-09 21:08:53.905976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:25.890 [2024-06-09 21:08:53.906459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.267 21:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.267 21:08:55 -- common/autotest_common.sh@852 -- # return 0 00:26:27.267 21:08:55 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:27.267 21:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.267 21:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.267 Some configs were skipped because the RPC state that can call them passed over. 
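The GUID checks that follow query the freshly examined GPT partitions back through the RPC layer and assert that each bdev's alias matches its on-disk unique_partition_guid. A minimal sketch of that query, using the same rpc.py entry point and jq filters the test applies (full trace below):

    # Look up the first partition bdev by its unique partition GUID
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 > first.json
    jq -r length first.json                                            # exactly one bdev expected
    jq -r '.[0].aliases[0]' first.json                                 # alias is the partition GUID
    jq -r '.[0].driver_specific.gpt.unique_partition_guid' first.json  # must match the alias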
00:26:27.267 21:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.267 21:08:55 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:26:27.267 21:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.267 21:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.267 21:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.267 21:08:55 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:26:27.267 21:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.267 21:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.267 21:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.267 21:08:55 -- bdev/blockdev.sh@619 -- # bdev='[ 00:26:27.267 { 00:26:27.267 "name": "Nvme0n1p1", 00:26:27.267 "aliases": [ 00:26:27.267 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:26:27.267 ], 00:26:27.267 "product_name": "GPT Disk", 00:26:27.267 "block_size": 4096, 00:26:27.267 "num_blocks": 655104, 00:26:27.267 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:27.267 "assigned_rate_limits": { 00:26:27.267 "rw_ios_per_sec": 0, 00:26:27.267 "rw_mbytes_per_sec": 0, 00:26:27.267 "r_mbytes_per_sec": 0, 00:26:27.267 "w_mbytes_per_sec": 0 00:26:27.267 }, 00:26:27.267 "claimed": false, 00:26:27.267 "zoned": false, 00:26:27.267 "supported_io_types": { 00:26:27.267 "read": true, 00:26:27.267 "write": true, 00:26:27.267 "unmap": true, 00:26:27.267 "write_zeroes": true, 00:26:27.267 "flush": true, 00:26:27.267 "reset": true, 00:26:27.267 "compare": true, 00:26:27.267 "compare_and_write": false, 00:26:27.267 "abort": true, 00:26:27.267 "nvme_admin": false, 00:26:27.267 "nvme_io": false 00:26:27.267 }, 00:26:27.267 "driver_specific": { 00:26:27.267 "gpt": { 00:26:27.267 "base_bdev": "Nvme0n1", 00:26:27.267 "offset_blocks": 256, 00:26:27.267 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:26:27.267 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:27.267 "partition_name": "SPDK_TEST_first" 00:26:27.267 } 00:26:27.267 } 00:26:27.267 } 00:26:27.267 ]' 00:26:27.267 21:08:55 -- bdev/blockdev.sh@620 -- # jq -r length 00:26:27.267 21:08:55 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:26:27.267 21:08:55 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:26:27.267 21:08:55 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:27.267 21:08:55 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:27.526 21:08:55 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:27.526 21:08:55 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:27.526 21:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.526 21:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.526 21:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.526 21:08:55 -- bdev/blockdev.sh@624 -- # bdev='[ 00:26:27.526 { 00:26:27.526 "name": "Nvme0n1p2", 00:26:27.526 "aliases": [ 00:26:27.526 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:26:27.526 ], 00:26:27.526 "product_name": "GPT Disk", 00:26:27.526 "block_size": 4096, 00:26:27.526 "num_blocks": 655103, 00:26:27.526 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:27.526 "assigned_rate_limits": { 00:26:27.526 "rw_ios_per_sec": 0, 00:26:27.526 
"rw_mbytes_per_sec": 0, 00:26:27.526 "r_mbytes_per_sec": 0, 00:26:27.526 "w_mbytes_per_sec": 0 00:26:27.526 }, 00:26:27.526 "claimed": false, 00:26:27.526 "zoned": false, 00:26:27.526 "supported_io_types": { 00:26:27.526 "read": true, 00:26:27.526 "write": true, 00:26:27.526 "unmap": true, 00:26:27.526 "write_zeroes": true, 00:26:27.526 "flush": true, 00:26:27.526 "reset": true, 00:26:27.526 "compare": true, 00:26:27.526 "compare_and_write": false, 00:26:27.526 "abort": true, 00:26:27.526 "nvme_admin": false, 00:26:27.526 "nvme_io": false 00:26:27.526 }, 00:26:27.526 "driver_specific": { 00:26:27.526 "gpt": { 00:26:27.526 "base_bdev": "Nvme0n1", 00:26:27.526 "offset_blocks": 655360, 00:26:27.526 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:26:27.526 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:27.526 "partition_name": "SPDK_TEST_second" 00:26:27.526 } 00:26:27.526 } 00:26:27.526 } 00:26:27.526 ]' 00:26:27.526 21:08:55 -- bdev/blockdev.sh@625 -- # jq -r length 00:26:27.526 21:08:55 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:26:27.526 21:08:55 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:26:27.526 21:08:55 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:27.526 21:08:55 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:27.526 21:08:55 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:27.526 21:08:55 -- bdev/blockdev.sh@629 -- # killprocess 132495 00:26:27.526 21:08:55 -- common/autotest_common.sh@926 -- # '[' -z 132495 ']' 00:26:27.526 21:08:55 -- common/autotest_common.sh@930 -- # kill -0 132495 00:26:27.526 21:08:55 -- common/autotest_common.sh@931 -- # uname 00:26:27.526 21:08:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:27.526 21:08:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132495 00:26:27.526 21:08:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:27.526 21:08:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:27.526 21:08:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132495' 00:26:27.526 killing process with pid 132495 00:26:27.526 21:08:55 -- common/autotest_common.sh@945 -- # kill 132495 00:26:27.526 21:08:55 -- common/autotest_common.sh@950 -- # wait 132495 00:26:29.428 00:26:29.428 real 0m3.924s 00:26:29.428 user 0m4.368s 00:26:29.428 sys 0m0.426s 00:26:29.428 21:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.428 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:29.428 ************************************ 00:26:29.428 END TEST bdev_gpt_uuid 00:26:29.428 ************************************ 00:26:29.428 21:08:57 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:26:29.428 21:08:57 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:26:29.428 21:08:57 -- bdev/blockdev.sh@809 -- # cleanup 00:26:29.428 21:08:57 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:29.428 21:08:57 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:29.428 21:08:57 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:26:29.428 21:08:57 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:26:29.428 21:08:57 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:26:29.428 21:08:57 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:29.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:29.687 Waiting for block devices as requested 00:26:29.687 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:29.946 21:08:57 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:26:29.946 21:08:57 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:26:29.946 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:26:29.946 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:26:29.946 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:26:29.946 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:26:29.946 21:08:57 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:26:29.946 00:26:29.946 real 0m44.002s 00:26:29.946 user 1m4.208s 00:26:29.946 sys 0m5.589s 00:26:29.946 21:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.946 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 ************************************ 00:26:29.946 END TEST blockdev_nvme_gpt 00:26:29.946 ************************************ 00:26:29.946 21:08:57 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:29.946 21:08:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:29.946 21:08:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:29.946 21:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:29.946 ************************************ 00:26:29.946 START TEST nvme 00:26:29.946 ************************************ 00:26:29.946 21:08:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:29.946 * Looking for test storage... 00:26:29.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:29.946 21:08:58 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:30.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:30.513 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:31.447 21:08:59 -- nvme/nvme.sh@79 -- # uname 00:26:31.447 21:08:59 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:26:31.447 21:08:59 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:26:31.447 21:08:59 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:26:31.447 21:08:59 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:26:31.447 21:08:59 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:26:31.447 21:08:59 -- common/autotest_common.sh@1045 -- # echo 0 00:26:31.447 21:08:59 -- common/autotest_common.sh@1047 -- # stubpid=132901 00:26:31.447 21:08:59 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:26:31.447 Waiting for stub to ready for secondary processes... 00:26:31.447 21:08:59 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:26:31.447 21:08:59 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:31.447 21:08:59 -- common/autotest_common.sh@1051 -- # [[ -e /proc/132901 ]] 00:26:31.447 21:08:59 -- common/autotest_common.sh@1052 -- # sleep 1s 00:26:31.705 [2024-06-09 21:08:59.626340] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:31.705 [2024-06-09 21:08:59.626559] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.651 21:09:00 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:32.651 21:09:00 -- common/autotest_common.sh@1051 -- # [[ -e /proc/132901 ]] 00:26:32.651 21:09:00 -- common/autotest_common.sh@1052 -- # sleep 1s 00:26:32.910 [2024-06-09 21:09:00.900044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.167 [2024-06-09 21:09:01.100809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.167 [2024-06-09 21:09:01.100945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.167 [2024-06-09 21:09:01.100950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.167 [2024-06-09 21:09:01.115654] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:33.167 [2024-06-09 21:09:01.121052] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:26:33.167 [2024-06-09 21:09:01.121522] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:26:33.425 21:09:01 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:33.426 done. 00:26:33.426 21:09:01 -- common/autotest_common.sh@1054 -- # echo done. 00:26:33.426 21:09:01 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:33.426 21:09:01 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:26:33.426 21:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.426 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:33.426 ************************************ 00:26:33.426 START TEST nvme_reset 00:26:33.426 ************************************ 00:26:33.426 21:09:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:33.994 Initializing NVMe Controllers 00:26:33.994 Skipping QEMU NVMe SSD at 0000:00:06.0 00:26:33.994 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:26:33.994 00:26:33.994 real 0m0.290s 00:26:33.994 user 0m0.085s 00:26:33.994 sys 0m0.148s 00:26:33.994 21:09:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.994 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 ************************************ 00:26:33.994 END TEST nvme_reset 00:26:33.994 ************************************ 00:26:33.994 21:09:01 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:26:33.994 21:09:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:33.994 21:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.994 21:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 ************************************ 00:26:33.994 START TEST nvme_identify 00:26:33.994 ************************************ 00:26:33.994 21:09:01 -- common/autotest_common.sh@1104 -- # nvme_identify 00:26:33.994 21:09:01 -- nvme/nvme.sh@12 -- # bdfs=() 00:26:33.994 21:09:01 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:26:33.994 21:09:01 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:26:33.994 21:09:01 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:26:33.994 21:09:01 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:26:33.994 21:09:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:33.994 21:09:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:33.994 21:09:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:33.994 21:09:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:33.994 21:09:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:26:33.994 21:09:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:26:33.994 21:09:01 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:26:34.253 [2024-06-09 21:09:02.229418] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 132941 terminated unexpected 00:26:34.253 ===================================================== 00:26:34.253 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:34.253 ===================================================== 00:26:34.253 Controller Capabilities/Features 00:26:34.253 ================================ 00:26:34.253 Vendor ID: 1b36 00:26:34.253 Subsystem Vendor ID: 1af4 00:26:34.253 Serial Number: 12340 00:26:34.253 Model Number: QEMU NVMe Ctrl 00:26:34.253 Firmware Version: 8.0.0 00:26:34.253 Recommended Arb Burst: 6 00:26:34.253 IEEE OUI Identifier: 00 54 52 00:26:34.253 Multi-path I/O 00:26:34.253 May have multiple subsystem ports: No 00:26:34.253 May have multiple controllers: No 00:26:34.253 Associated with SR-IOV VF: No 00:26:34.253 Max Data Transfer Size: 524288 00:26:34.253 Max Number of Namespaces: 256 00:26:34.253 Max Number of I/O Queues: 64 00:26:34.253 NVMe Specification Version (VS): 1.4 00:26:34.253 NVMe Specification Version (Identify): 1.4 00:26:34.253 Maximum Queue Entries: 2048 00:26:34.253 Contiguous Queues Required: Yes 00:26:34.253 Arbitration Mechanisms Supported 00:26:34.253 Weighted Round Robin: Not Supported 00:26:34.253 Vendor Specific: Not Supported 00:26:34.253 Reset Timeout: 7500 ms 00:26:34.253 Doorbell Stride: 4 bytes 00:26:34.253 NVM Subsystem Reset: Not Supported 00:26:34.253 Command Sets Supported 00:26:34.253 NVM Command Set: Supported 00:26:34.253 Boot Partition: Not Supported 00:26:34.253 Memory Page Size Minimum: 4096 bytes 00:26:34.253 Memory Page Size Maximum: 65536 bytes 00:26:34.253 Persistent Memory Region: Not Supported 00:26:34.253 Optional Asynchronous Events Supported 00:26:34.253 Namespace Attribute Notices: Supported 00:26:34.253 Firmware Activation Notices: Not Supported 00:26:34.253 ANA Change Notices: Not Supported 00:26:34.253 PLE Aggregate Log Change Notices: Not Supported 00:26:34.253 LBA Status Info Alert Notices: Not Supported 00:26:34.253 EGE Aggregate Log Change Notices: Not Supported 00:26:34.253 Normal NVM Subsystem Shutdown event: Not Supported 00:26:34.254 Zone Descriptor Change Notices: Not Supported 00:26:34.254 Discovery Log Change Notices: Not Supported 00:26:34.254 Controller Attributes 00:26:34.254 128-bit Host Identifier: Not Supported 00:26:34.254 Non-Operational Permissive Mode: Not Supported 00:26:34.254 NVM Sets: Not Supported 00:26:34.254 Read Recovery Levels: Not Supported 00:26:34.254 Endurance Groups: Not Supported 00:26:34.254 Predictable Latency Mode: Not Supported 00:26:34.254 Traffic Based Keep ALive: Not Supported 00:26:34.254 Namespace Granularity: Not Supported 00:26:34.254 SQ Associations: Not Supported 00:26:34.254 UUID List: Not Supported 00:26:34.254 Multi-Domain Subsystem: Not Supported 00:26:34.254 
Fixed Capacity Management: Not Supported 00:26:34.254 Variable Capacity Management: Not Supported 00:26:34.254 Delete Endurance Group: Not Supported 00:26:34.254 Delete NVM Set: Not Supported 00:26:34.254 Extended LBA Formats Supported: Supported 00:26:34.254 Flexible Data Placement Supported: Not Supported 00:26:34.254 00:26:34.254 Controller Memory Buffer Support 00:26:34.254 ================================ 00:26:34.254 Supported: No 00:26:34.254 00:26:34.254 Persistent Memory Region Support 00:26:34.254 ================================ 00:26:34.254 Supported: No 00:26:34.254 00:26:34.254 Admin Command Set Attributes 00:26:34.254 ============================ 00:26:34.254 Security Send/Receive: Not Supported 00:26:34.254 Format NVM: Supported 00:26:34.254 Firmware Activate/Download: Not Supported 00:26:34.254 Namespace Management: Supported 00:26:34.254 Device Self-Test: Not Supported 00:26:34.254 Directives: Supported 00:26:34.254 NVMe-MI: Not Supported 00:26:34.254 Virtualization Management: Not Supported 00:26:34.254 Doorbell Buffer Config: Supported 00:26:34.254 Get LBA Status Capability: Not Supported 00:26:34.254 Command & Feature Lockdown Capability: Not Supported 00:26:34.254 Abort Command Limit: 4 00:26:34.254 Async Event Request Limit: 4 00:26:34.254 Number of Firmware Slots: N/A 00:26:34.254 Firmware Slot 1 Read-Only: N/A 00:26:34.254 Firmware Activation Without Reset: N/A 00:26:34.254 Multiple Update Detection Support: N/A 00:26:34.254 Firmware Update Granularity: No Information Provided 00:26:34.254 Per-Namespace SMART Log: Yes 00:26:34.254 Asymmetric Namespace Access Log Page: Not Supported 00:26:34.254 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:34.254 Command Effects Log Page: Supported 00:26:34.254 Get Log Page Extended Data: Supported 00:26:34.254 Telemetry Log Pages: Not Supported 00:26:34.254 Persistent Event Log Pages: Not Supported 00:26:34.254 Supported Log Pages Log Page: May Support 00:26:34.254 Commands Supported & Effects Log Page: Not Supported 00:26:34.254 Feature Identifiers & Effects Log Page:May Support 00:26:34.254 NVMe-MI Commands & Effects Log Page: May Support 00:26:34.254 Data Area 4 for Telemetry Log: Not Supported 00:26:34.254 Error Log Page Entries Supported: 1 00:26:34.254 Keep Alive: Not Supported 00:26:34.254 00:26:34.254 NVM Command Set Attributes 00:26:34.254 ========================== 00:26:34.254 Submission Queue Entry Size 00:26:34.254 Max: 64 00:26:34.254 Min: 64 00:26:34.254 Completion Queue Entry Size 00:26:34.254 Max: 16 00:26:34.254 Min: 16 00:26:34.254 Number of Namespaces: 256 00:26:34.254 Compare Command: Supported 00:26:34.254 Write Uncorrectable Command: Not Supported 00:26:34.254 Dataset Management Command: Supported 00:26:34.254 Write Zeroes Command: Supported 00:26:34.254 Set Features Save Field: Supported 00:26:34.254 Reservations: Not Supported 00:26:34.254 Timestamp: Supported 00:26:34.254 Copy: Supported 00:26:34.254 Volatile Write Cache: Present 00:26:34.254 Atomic Write Unit (Normal): 1 00:26:34.254 Atomic Write Unit (PFail): 1 00:26:34.254 Atomic Compare & Write Unit: 1 00:26:34.254 Fused Compare & Write: Not Supported 00:26:34.254 Scatter-Gather List 00:26:34.254 SGL Command Set: Supported 00:26:34.254 SGL Keyed: Not Supported 00:26:34.254 SGL Bit Bucket Descriptor: Not Supported 00:26:34.254 SGL Metadata Pointer: Not Supported 00:26:34.254 Oversized SGL: Not Supported 00:26:34.254 SGL Metadata Address: Not Supported 00:26:34.254 SGL Offset: Not Supported 00:26:34.254 Transport SGL Data Block: Not Supported 
00:26:34.254 Replay Protected Memory Block: Not Supported 00:26:34.254 00:26:34.254 Firmware Slot Information 00:26:34.254 ========================= 00:26:34.254 Active slot: 1 00:26:34.254 Slot 1 Firmware Revision: 1.0 00:26:34.254 00:26:34.254 00:26:34.254 Commands Supported and Effects 00:26:34.254 ============================== 00:26:34.254 Admin Commands 00:26:34.254 -------------- 00:26:34.254 Delete I/O Submission Queue (00h): Supported 00:26:34.254 Create I/O Submission Queue (01h): Supported 00:26:34.254 Get Log Page (02h): Supported 00:26:34.254 Delete I/O Completion Queue (04h): Supported 00:26:34.254 Create I/O Completion Queue (05h): Supported 00:26:34.254 Identify (06h): Supported 00:26:34.254 Abort (08h): Supported 00:26:34.254 Set Features (09h): Supported 00:26:34.254 Get Features (0Ah): Supported 00:26:34.254 Asynchronous Event Request (0Ch): Supported 00:26:34.254 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:34.254 Directive Send (19h): Supported 00:26:34.254 Directive Receive (1Ah): Supported 00:26:34.254 Virtualization Management (1Ch): Supported 00:26:34.254 Doorbell Buffer Config (7Ch): Supported 00:26:34.254 Format NVM (80h): Supported LBA-Change 00:26:34.254 I/O Commands 00:26:34.254 ------------ 00:26:34.254 Flush (00h): Supported LBA-Change 00:26:34.254 Write (01h): Supported LBA-Change 00:26:34.254 Read (02h): Supported 00:26:34.254 Compare (05h): Supported 00:26:34.254 Write Zeroes (08h): Supported LBA-Change 00:26:34.254 Dataset Management (09h): Supported LBA-Change 00:26:34.254 Unknown (0Ch): Supported 00:26:34.254 Unknown (12h): Supported 00:26:34.254 Copy (19h): Supported LBA-Change 00:26:34.254 Unknown (1Dh): Supported LBA-Change 00:26:34.254 00:26:34.254 Error Log 00:26:34.254 ========= 00:26:34.254 00:26:34.254 Arbitration 00:26:34.254 =========== 00:26:34.254 Arbitration Burst: no limit 00:26:34.254 00:26:34.254 Power Management 00:26:34.254 ================ 00:26:34.254 Number of Power States: 1 00:26:34.254 Current Power State: Power State #0 00:26:34.254 Power State #0: 00:26:34.254 Max Power: 25.00 W 00:26:34.254 Non-Operational State: Operational 00:26:34.254 Entry Latency: 16 microseconds 00:26:34.254 Exit Latency: 4 microseconds 00:26:34.254 Relative Read Throughput: 0 00:26:34.254 Relative Read Latency: 0 00:26:34.254 Relative Write Throughput: 0 00:26:34.254 Relative Write Latency: 0 00:26:34.254 Idle Power: Not Reported 00:26:34.254 Active Power: Not Reported 00:26:34.254 Non-Operational Permissive Mode: Not Supported 00:26:34.254 00:26:34.254 Health Information 00:26:34.254 ================== 00:26:34.254 Critical Warnings: 00:26:34.254 Available Spare Space: OK 00:26:34.254 Temperature: OK 00:26:34.254 Device Reliability: OK 00:26:34.254 Read Only: No 00:26:34.254 Volatile Memory Backup: OK 00:26:34.254 Current Temperature: 323 Kelvin (50 Celsius) 00:26:34.255 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:34.255 Available Spare: 0% 00:26:34.255 Available Spare Threshold: 0% 00:26:34.255 Life Percentage Used: 0% 00:26:34.255 Data Units Read: 7829 00:26:34.255 Data Units Written: 3818 00:26:34.255 Host Read Commands: 367743 00:26:34.255 Host Write Commands: 199214 00:26:34.255 Controller Busy Time: 0 minutes 00:26:34.255 Power Cycles: 0 00:26:34.255 Power On Hours: 0 hours 00:26:34.255 Unsafe Shutdowns: 0 00:26:34.255 Unrecoverable Media Errors: 0 00:26:34.255 Lifetime Error Log Entries: 0 00:26:34.255 Warning Temperature Time: 0 minutes 00:26:34.255 Critical Temperature Time: 0 minutes 00:26:34.255 00:26:34.255 
Number of Queues 00:26:34.255 ================ 00:26:34.255 Number of I/O Submission Queues: 64 00:26:34.255 Number of I/O Completion Queues: 64 00:26:34.255 00:26:34.255 ZNS Specific Controller Data 00:26:34.255 ============================ 00:26:34.255 Zone Append Size Limit: 0 00:26:34.255 00:26:34.255 00:26:34.255 Active Namespaces 00:26:34.255 ================= 00:26:34.255 Namespace ID:1 00:26:34.255 Error Recovery Timeout: Unlimited 00:26:34.255 Command Set Identifier: NVM (00h) 00:26:34.255 Deallocate: Supported 00:26:34.255 Deallocated/Unwritten Error: Supported 00:26:34.255 Deallocated Read Value: All 0x00 00:26:34.255 Deallocate in Write Zeroes: Not Supported 00:26:34.255 Deallocated Guard Field: 0xFFFF 00:26:34.255 Flush: Supported 00:26:34.255 Reservation: Not Supported 00:26:34.255 Namespace Sharing Capabilities: Private 00:26:34.255 Size (in LBAs): 1310720 (5GiB) 00:26:34.255 Capacity (in LBAs): 1310720 (5GiB) 00:26:34.255 Utilization (in LBAs): 1310720 (5GiB) 00:26:34.255 Thin Provisioning: Not Supported 00:26:34.255 Per-NS Atomic Units: No 00:26:34.255 Maximum Single Source Range Length: 128 00:26:34.255 Maximum Copy Length: 128 00:26:34.255 Maximum Source Range Count: 128 00:26:34.255 NGUID/EUI64 Never Reused: No 00:26:34.255 Namespace Write Protected: No 00:26:34.255 Number of LBA Formats: 8 00:26:34.255 Current LBA Format: LBA Format #04 00:26:34.255 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:34.255 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:34.255 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:34.255 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:34.255 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:34.255 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:34.255 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:34.255 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:34.255 00:26:34.255 21:09:02 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:34.255 21:09:02 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:34.514 ===================================================== 00:26:34.514 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:34.514 ===================================================== 00:26:34.514 Controller Capabilities/Features 00:26:34.514 ================================ 00:26:34.514 Vendor ID: 1b36 00:26:34.514 Subsystem Vendor ID: 1af4 00:26:34.514 Serial Number: 12340 00:26:34.514 Model Number: QEMU NVMe Ctrl 00:26:34.514 Firmware Version: 8.0.0 00:26:34.514 Recommended Arb Burst: 6 00:26:34.514 IEEE OUI Identifier: 00 54 52 00:26:34.514 Multi-path I/O 00:26:34.514 May have multiple subsystem ports: No 00:26:34.514 May have multiple controllers: No 00:26:34.514 Associated with SR-IOV VF: No 00:26:34.514 Max Data Transfer Size: 524288 00:26:34.514 Max Number of Namespaces: 256 00:26:34.514 Max Number of I/O Queues: 64 00:26:34.514 NVMe Specification Version (VS): 1.4 00:26:34.514 NVMe Specification Version (Identify): 1.4 00:26:34.514 Maximum Queue Entries: 2048 00:26:34.514 Contiguous Queues Required: Yes 00:26:34.514 Arbitration Mechanisms Supported 00:26:34.514 Weighted Round Robin: Not Supported 00:26:34.515 Vendor Specific: Not Supported 00:26:34.515 Reset Timeout: 7500 ms 00:26:34.515 Doorbell Stride: 4 bytes 00:26:34.515 NVM Subsystem Reset: Not Supported 00:26:34.515 Command Sets Supported 00:26:34.515 NVM Command Set: Supported 00:26:34.515 Boot Partition: Not Supported 00:26:34.515 Memory Page Size 
Minimum: 4096 bytes 00:26:34.515 Memory Page Size Maximum: 65536 bytes 00:26:34.515 Persistent Memory Region: Not Supported 00:26:34.515 Optional Asynchronous Events Supported 00:26:34.515 Namespace Attribute Notices: Supported 00:26:34.515 Firmware Activation Notices: Not Supported 00:26:34.515 ANA Change Notices: Not Supported 00:26:34.515 PLE Aggregate Log Change Notices: Not Supported 00:26:34.515 LBA Status Info Alert Notices: Not Supported 00:26:34.515 EGE Aggregate Log Change Notices: Not Supported 00:26:34.515 Normal NVM Subsystem Shutdown event: Not Supported 00:26:34.515 Zone Descriptor Change Notices: Not Supported 00:26:34.515 Discovery Log Change Notices: Not Supported 00:26:34.515 Controller Attributes 00:26:34.515 128-bit Host Identifier: Not Supported 00:26:34.515 Non-Operational Permissive Mode: Not Supported 00:26:34.515 NVM Sets: Not Supported 00:26:34.515 Read Recovery Levels: Not Supported 00:26:34.515 Endurance Groups: Not Supported 00:26:34.515 Predictable Latency Mode: Not Supported 00:26:34.515 Traffic Based Keep ALive: Not Supported 00:26:34.515 Namespace Granularity: Not Supported 00:26:34.515 SQ Associations: Not Supported 00:26:34.515 UUID List: Not Supported 00:26:34.515 Multi-Domain Subsystem: Not Supported 00:26:34.515 Fixed Capacity Management: Not Supported 00:26:34.515 Variable Capacity Management: Not Supported 00:26:34.515 Delete Endurance Group: Not Supported 00:26:34.515 Delete NVM Set: Not Supported 00:26:34.515 Extended LBA Formats Supported: Supported 00:26:34.515 Flexible Data Placement Supported: Not Supported 00:26:34.515 00:26:34.515 Controller Memory Buffer Support 00:26:34.515 ================================ 00:26:34.515 Supported: No 00:26:34.515 00:26:34.515 Persistent Memory Region Support 00:26:34.515 ================================ 00:26:34.515 Supported: No 00:26:34.515 00:26:34.515 Admin Command Set Attributes 00:26:34.515 ============================ 00:26:34.515 Security Send/Receive: Not Supported 00:26:34.515 Format NVM: Supported 00:26:34.515 Firmware Activate/Download: Not Supported 00:26:34.515 Namespace Management: Supported 00:26:34.515 Device Self-Test: Not Supported 00:26:34.515 Directives: Supported 00:26:34.515 NVMe-MI: Not Supported 00:26:34.515 Virtualization Management: Not Supported 00:26:34.515 Doorbell Buffer Config: Supported 00:26:34.515 Get LBA Status Capability: Not Supported 00:26:34.515 Command & Feature Lockdown Capability: Not Supported 00:26:34.515 Abort Command Limit: 4 00:26:34.515 Async Event Request Limit: 4 00:26:34.515 Number of Firmware Slots: N/A 00:26:34.515 Firmware Slot 1 Read-Only: N/A 00:26:34.515 Firmware Activation Without Reset: N/A 00:26:34.515 Multiple Update Detection Support: N/A 00:26:34.515 Firmware Update Granularity: No Information Provided 00:26:34.515 Per-Namespace SMART Log: Yes 00:26:34.515 Asymmetric Namespace Access Log Page: Not Supported 00:26:34.515 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:34.515 Command Effects Log Page: Supported 00:26:34.515 Get Log Page Extended Data: Supported 00:26:34.515 Telemetry Log Pages: Not Supported 00:26:34.515 Persistent Event Log Pages: Not Supported 00:26:34.515 Supported Log Pages Log Page: May Support 00:26:34.515 Commands Supported & Effects Log Page: Not Supported 00:26:34.515 Feature Identifiers & Effects Log Page:May Support 00:26:34.515 NVMe-MI Commands & Effects Log Page: May Support 00:26:34.515 Data Area 4 for Telemetry Log: Not Supported 00:26:34.515 Error Log Page Entries Supported: 1 00:26:34.515 Keep Alive: Not 
Supported 00:26:34.515 00:26:34.515 NVM Command Set Attributes 00:26:34.515 ========================== 00:26:34.515 Submission Queue Entry Size 00:26:34.515 Max: 64 00:26:34.515 Min: 64 00:26:34.515 Completion Queue Entry Size 00:26:34.515 Max: 16 00:26:34.515 Min: 16 00:26:34.515 Number of Namespaces: 256 00:26:34.515 Compare Command: Supported 00:26:34.515 Write Uncorrectable Command: Not Supported 00:26:34.515 Dataset Management Command: Supported 00:26:34.515 Write Zeroes Command: Supported 00:26:34.515 Set Features Save Field: Supported 00:26:34.515 Reservations: Not Supported 00:26:34.515 Timestamp: Supported 00:26:34.515 Copy: Supported 00:26:34.515 Volatile Write Cache: Present 00:26:34.515 Atomic Write Unit (Normal): 1 00:26:34.515 Atomic Write Unit (PFail): 1 00:26:34.515 Atomic Compare & Write Unit: 1 00:26:34.515 Fused Compare & Write: Not Supported 00:26:34.515 Scatter-Gather List 00:26:34.515 SGL Command Set: Supported 00:26:34.515 SGL Keyed: Not Supported 00:26:34.515 SGL Bit Bucket Descriptor: Not Supported 00:26:34.515 SGL Metadata Pointer: Not Supported 00:26:34.515 Oversized SGL: Not Supported 00:26:34.515 SGL Metadata Address: Not Supported 00:26:34.515 SGL Offset: Not Supported 00:26:34.515 Transport SGL Data Block: Not Supported 00:26:34.515 Replay Protected Memory Block: Not Supported 00:26:34.515 00:26:34.515 Firmware Slot Information 00:26:34.515 ========================= 00:26:34.515 Active slot: 1 00:26:34.515 Slot 1 Firmware Revision: 1.0 00:26:34.515 00:26:34.515 00:26:34.515 Commands Supported and Effects 00:26:34.515 ============================== 00:26:34.515 Admin Commands 00:26:34.515 -------------- 00:26:34.515 Delete I/O Submission Queue (00h): Supported 00:26:34.515 Create I/O Submission Queue (01h): Supported 00:26:34.515 Get Log Page (02h): Supported 00:26:34.515 Delete I/O Completion Queue (04h): Supported 00:26:34.515 Create I/O Completion Queue (05h): Supported 00:26:34.515 Identify (06h): Supported 00:26:34.515 Abort (08h): Supported 00:26:34.515 Set Features (09h): Supported 00:26:34.515 Get Features (0Ah): Supported 00:26:34.515 Asynchronous Event Request (0Ch): Supported 00:26:34.515 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:34.515 Directive Send (19h): Supported 00:26:34.515 Directive Receive (1Ah): Supported 00:26:34.515 Virtualization Management (1Ch): Supported 00:26:34.515 Doorbell Buffer Config (7Ch): Supported 00:26:34.515 Format NVM (80h): Supported LBA-Change 00:26:34.515 I/O Commands 00:26:34.515 ------------ 00:26:34.515 Flush (00h): Supported LBA-Change 00:26:34.515 Write (01h): Supported LBA-Change 00:26:34.515 Read (02h): Supported 00:26:34.515 Compare (05h): Supported 00:26:34.515 Write Zeroes (08h): Supported LBA-Change 00:26:34.515 Dataset Management (09h): Supported LBA-Change 00:26:34.515 Unknown (0Ch): Supported 00:26:34.515 Unknown (12h): Supported 00:26:34.515 Copy (19h): Supported LBA-Change 00:26:34.515 Unknown (1Dh): Supported LBA-Change 00:26:34.515 00:26:34.515 Error Log 00:26:34.515 ========= 00:26:34.515 00:26:34.516 Arbitration 00:26:34.516 =========== 00:26:34.516 Arbitration Burst: no limit 00:26:34.516 00:26:34.516 Power Management 00:26:34.516 ================ 00:26:34.516 Number of Power States: 1 00:26:34.516 Current Power State: Power State #0 00:26:34.516 Power State #0: 00:26:34.516 Max Power: 25.00 W 00:26:34.516 Non-Operational State: Operational 00:26:34.516 Entry Latency: 16 microseconds 00:26:34.516 Exit Latency: 4 microseconds 00:26:34.516 Relative Read Throughput: 0 
00:26:34.516 Relative Read Latency: 0 00:26:34.516 Relative Write Throughput: 0 00:26:34.516 Relative Write Latency: 0 00:26:34.516 Idle Power: Not Reported 00:26:34.516 Active Power: Not Reported 00:26:34.516 Non-Operational Permissive Mode: Not Supported 00:26:34.516 00:26:34.516 Health Information 00:26:34.516 ================== 00:26:34.516 Critical Warnings: 00:26:34.516 Available Spare Space: OK 00:26:34.516 Temperature: OK 00:26:34.516 Device Reliability: OK 00:26:34.516 Read Only: No 00:26:34.516 Volatile Memory Backup: OK 00:26:34.516 Current Temperature: 323 Kelvin (50 Celsius) 00:26:34.516 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:34.516 Available Spare: 0% 00:26:34.516 Available Spare Threshold: 0% 00:26:34.516 Life Percentage Used: 0% 00:26:34.516 Data Units Read: 7829 00:26:34.516 Data Units Written: 3818 00:26:34.516 Host Read Commands: 367743 00:26:34.516 Host Write Commands: 199214 00:26:34.516 Controller Busy Time: 0 minutes 00:26:34.516 Power Cycles: 0 00:26:34.516 Power On Hours: 0 hours 00:26:34.516 Unsafe Shutdowns: 0 00:26:34.516 Unrecoverable Media Errors: 0 00:26:34.516 Lifetime Error Log Entries: 0 00:26:34.516 Warning Temperature Time: 0 minutes 00:26:34.516 Critical Temperature Time: 0 minutes 00:26:34.516 00:26:34.516 Number of Queues 00:26:34.516 ================ 00:26:34.516 Number of I/O Submission Queues: 64 00:26:34.516 Number of I/O Completion Queues: 64 00:26:34.516 00:26:34.516 ZNS Specific Controller Data 00:26:34.516 ============================ 00:26:34.516 Zone Append Size Limit: 0 00:26:34.516 00:26:34.516 00:26:34.516 Active Namespaces 00:26:34.516 ================= 00:26:34.516 Namespace ID:1 00:26:34.516 Error Recovery Timeout: Unlimited 00:26:34.516 Command Set Identifier: NVM (00h) 00:26:34.516 Deallocate: Supported 00:26:34.516 Deallocated/Unwritten Error: Supported 00:26:34.516 Deallocated Read Value: All 0x00 00:26:34.516 Deallocate in Write Zeroes: Not Supported 00:26:34.516 Deallocated Guard Field: 0xFFFF 00:26:34.516 Flush: Supported 00:26:34.516 Reservation: Not Supported 00:26:34.516 Namespace Sharing Capabilities: Private 00:26:34.516 Size (in LBAs): 1310720 (5GiB) 00:26:34.516 Capacity (in LBAs): 1310720 (5GiB) 00:26:34.516 Utilization (in LBAs): 1310720 (5GiB) 00:26:34.516 Thin Provisioning: Not Supported 00:26:34.516 Per-NS Atomic Units: No 00:26:34.516 Maximum Single Source Range Length: 128 00:26:34.516 Maximum Copy Length: 128 00:26:34.516 Maximum Source Range Count: 128 00:26:34.516 NGUID/EUI64 Never Reused: No 00:26:34.516 Namespace Write Protected: No 00:26:34.516 Number of LBA Formats: 8 00:26:34.516 Current LBA Format: LBA Format #04 00:26:34.516 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:34.516 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:34.516 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:34.516 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:34.516 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:34.516 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:34.516 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:34.516 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:34.516 00:26:34.516 00:26:34.516 real 0m0.664s 00:26:34.516 user 0m0.273s 00:26:34.516 sys 0m0.277s 00:26:34.516 21:09:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.516 21:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.516 ************************************ 00:26:34.516 END TEST nvme_identify 00:26:34.516 ************************************ 00:26:34.516 
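The identify test above resolves its controller list via get_nvme_bdfs: scripts/gen_nvme.sh emits a JSON bdev configuration, jq extracts each controller's PCI address, and spdk_nvme_identify is then run once per address. A condensed sketch of that pattern, assembled from the xtrace lines in this run (paths as they appear here; the loop body is illustrative rather than the verbatim nvme.sh source):

    # Enumerate NVMe PCI addresses from the generated bdev config...
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # ...then run identify against each controller individually.
    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done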
21:09:02 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:26:34.516 21:09:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:34.516 21:09:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:34.516 21:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:34.516 ************************************ 00:26:34.516 START TEST nvme_perf 00:26:34.516 ************************************ 00:26:34.516 21:09:02 -- common/autotest_common.sh@1104 -- # nvme_perf 00:26:34.516 21:09:02 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:26:35.894 Initializing NVMe Controllers 00:26:35.894 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:35.894 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:35.894 Initialization complete. Launching workers. 00:26:35.894 ======================================================== 00:26:35.894 Latency(us) 00:26:35.894 Device Information : IOPS MiB/s Average min max 00:26:35.894 PCIE (0000:00:06.0) NSID 1 from core 0: 54656.00 640.50 2342.15 1301.36 7041.82 00:26:35.894 ======================================================== 00:26:35.894 Total : 54656.00 640.50 2342.15 1301.36 7041.82 00:26:35.894 00:26:35.894 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:35.894 ================================================================================= 00:26:35.894 1.00000% : 1437.324us 00:26:35.894 10.00000% : 1638.400us 00:26:35.894 25.00000% : 1891.607us 00:26:35.894 50.00000% : 2323.549us 00:26:35.894 75.00000% : 2740.596us 00:26:35.894 90.00000% : 3038.487us 00:26:35.894 95.00000% : 3321.484us 00:26:35.894 98.00000% : 3544.902us 00:26:35.894 99.00000% : 3708.742us 00:26:35.894 99.50000% : 4051.316us 00:26:35.894 99.90000% : 5362.036us 00:26:35.894 99.99000% : 6881.280us 00:26:35.894 99.99900% : 7060.015us 00:26:35.894 99.99990% : 7060.015us 00:26:35.894 99.99999% : 7060.015us 00:26:35.894 00:26:35.894 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:35.894 ============================================================================== 00:26:35.894 Range in us Cumulative IO count 00:26:35.894 1295.825 - 1303.273: 0.0018% ( 1) 00:26:35.894 1303.273 - 1310.720: 0.0037% ( 1) 00:26:35.894 1310.720 - 1318.167: 0.0146% ( 6) 00:26:35.894 1318.167 - 1325.615: 0.0220% ( 4) 00:26:35.894 1325.615 - 1333.062: 0.0311% ( 5) 00:26:35.894 1333.062 - 1340.509: 0.0512% ( 11) 00:26:35.894 1340.509 - 1347.956: 0.0604% ( 5) 00:26:35.894 1347.956 - 1355.404: 0.0787% ( 10) 00:26:35.894 1355.404 - 1362.851: 0.1061% ( 15) 00:26:35.894 1362.851 - 1370.298: 0.1445% ( 21) 00:26:35.894 1370.298 - 1377.745: 0.1903% ( 25) 00:26:35.894 1377.745 - 1385.193: 0.2543% ( 35) 00:26:35.894 1385.193 - 1392.640: 0.3293% ( 41) 00:26:35.894 1392.640 - 1400.087: 0.4226% ( 51) 00:26:35.894 1400.087 - 1407.535: 0.5397% ( 64) 00:26:35.894 1407.535 - 1414.982: 0.6916% ( 83) 00:26:35.894 1414.982 - 1422.429: 0.8398% ( 81) 00:26:35.894 1422.429 - 1429.876: 0.9917% ( 83) 00:26:35.894 1429.876 - 1437.324: 1.1636% ( 94) 00:26:35.894 1437.324 - 1444.771: 1.3612% ( 108) 00:26:35.894 1444.771 - 1452.218: 1.5460% ( 101) 00:26:35.894 1452.218 - 1459.665: 1.7272% ( 99) 00:26:35.894 1459.665 - 1467.113: 1.9394% ( 116) 00:26:35.894 1467.113 - 1474.560: 2.1699% ( 126) 00:26:35.894 1474.560 - 1482.007: 2.4206% ( 137) 00:26:35.894 1482.007 - 1489.455: 2.6932% ( 149) 00:26:35.894 1489.455 - 1496.902: 2.9494% ( 140) 00:26:35.894 1496.902 - 1504.349: 3.2750% ( 178) 00:26:35.895 
1504.349 - 1511.796: 3.5787% ( 166) 00:26:35.895 1511.796 - 1519.244: 3.8934% ( 172) 00:26:35.895 1519.244 - 1526.691: 4.2173% ( 177) 00:26:35.895 1526.691 - 1534.138: 4.5430% ( 178) 00:26:35.895 1534.138 - 1541.585: 4.9107% ( 201) 00:26:35.895 1541.585 - 1549.033: 5.2730% ( 198) 00:26:35.895 1549.033 - 1556.480: 5.6261% ( 193) 00:26:35.895 1556.480 - 1563.927: 6.0250% ( 218) 00:26:35.895 1563.927 - 1571.375: 6.4238% ( 218) 00:26:35.895 1571.375 - 1578.822: 6.8025% ( 207) 00:26:35.895 1578.822 - 1586.269: 7.2032% ( 219) 00:26:35.895 1586.269 - 1593.716: 7.5966% ( 215) 00:26:35.895 1593.716 - 1601.164: 7.9845% ( 212) 00:26:35.895 1601.164 - 1608.611: 8.3907% ( 222) 00:26:35.895 1608.611 - 1616.058: 8.8261% ( 238) 00:26:35.895 1616.058 - 1623.505: 9.2488% ( 231) 00:26:35.895 1623.505 - 1630.953: 9.6934% ( 243) 00:26:35.895 1630.953 - 1638.400: 10.1105% ( 228) 00:26:35.895 1638.400 - 1645.847: 10.5460% ( 238) 00:26:35.895 1645.847 - 1653.295: 10.9997% ( 248) 00:26:35.895 1653.295 - 1660.742: 11.4205% ( 230) 00:26:35.895 1660.742 - 1668.189: 11.8651% ( 243) 00:26:35.895 1668.189 - 1675.636: 12.2896% ( 232) 00:26:35.895 1675.636 - 1683.084: 12.7086% ( 229) 00:26:35.895 1683.084 - 1690.531: 13.1587% ( 246) 00:26:35.895 1690.531 - 1697.978: 13.5959% ( 239) 00:26:35.895 1697.978 - 1705.425: 14.0040% ( 223) 00:26:35.895 1705.425 - 1712.873: 14.4778% ( 259) 00:26:35.895 1712.873 - 1720.320: 14.9151% ( 239) 00:26:35.895 1720.320 - 1727.767: 15.3560% ( 241) 00:26:35.895 1727.767 - 1735.215: 15.8153% ( 251) 00:26:35.895 1735.215 - 1742.662: 16.2361% ( 230) 00:26:35.895 1742.662 - 1750.109: 16.6953% ( 251) 00:26:35.895 1750.109 - 1757.556: 17.1655% ( 257) 00:26:35.895 1757.556 - 1765.004: 17.5772% ( 225) 00:26:35.895 1765.004 - 1772.451: 18.0072% ( 235) 00:26:35.895 1772.451 - 1779.898: 18.4774% ( 257) 00:26:35.895 1779.898 - 1787.345: 18.9019% ( 232) 00:26:35.895 1787.345 - 1794.793: 19.3556% ( 248) 00:26:35.895 1794.793 - 1802.240: 19.7929% ( 239) 00:26:35.895 1802.240 - 1809.687: 20.2576% ( 254) 00:26:35.895 1809.687 - 1817.135: 20.6894% ( 236) 00:26:35.895 1817.135 - 1824.582: 21.1413% ( 247) 00:26:35.895 1824.582 - 1832.029: 21.5658% ( 232) 00:26:35.895 1832.029 - 1839.476: 22.0195% ( 248) 00:26:35.895 1839.476 - 1846.924: 22.4550% ( 238) 00:26:35.895 1846.924 - 1854.371: 22.9014% ( 244) 00:26:35.895 1854.371 - 1861.818: 23.3332% ( 236) 00:26:35.895 1861.818 - 1869.265: 23.7650% ( 236) 00:26:35.895 1869.265 - 1876.713: 24.2352% ( 257) 00:26:35.895 1876.713 - 1884.160: 24.6725% ( 239) 00:26:35.895 1884.160 - 1891.607: 25.1025% ( 235) 00:26:35.895 1891.607 - 1899.055: 25.5562% ( 248) 00:26:35.895 1899.055 - 1906.502: 25.9862% ( 235) 00:26:35.895 1906.502 - 1921.396: 26.8827% ( 490) 00:26:35.895 1921.396 - 1936.291: 27.7774% ( 489) 00:26:35.895 1936.291 - 1951.185: 28.6410% ( 472) 00:26:35.895 1951.185 - 1966.080: 29.5302% ( 486) 00:26:35.895 1966.080 - 1980.975: 30.4230% ( 488) 00:26:35.895 1980.975 - 1995.869: 31.2957% ( 477) 00:26:35.895 1995.869 - 2010.764: 32.1703% ( 478) 00:26:35.895 2010.764 - 2025.658: 33.0430% ( 477) 00:26:35.895 2025.658 - 2040.553: 33.9066% ( 472) 00:26:35.895 2040.553 - 2055.447: 34.7922% ( 484) 00:26:35.895 2055.447 - 2070.342: 35.6814% ( 486) 00:26:35.895 2070.342 - 2085.236: 36.5577% ( 479) 00:26:35.895 2085.236 - 2100.131: 37.4451% ( 485) 00:26:35.895 2100.131 - 2115.025: 38.3087% ( 472) 00:26:35.895 2115.025 - 2129.920: 39.1887% ( 481) 00:26:35.895 2129.920 - 2144.815: 40.0779% ( 486) 00:26:35.895 2144.815 - 2159.709: 40.9598% ( 482) 00:26:35.895 2159.709 - 2174.604: 
41.8472% ( 485) 00:26:35.895 2174.604 - 2189.498: 42.7419% ( 489) 00:26:35.895 2189.498 - 2204.393: 43.6329% ( 487) 00:26:35.895 2204.393 - 2219.287: 44.4983% ( 473) 00:26:35.895 2219.287 - 2234.182: 45.4040% ( 495) 00:26:35.895 2234.182 - 2249.076: 46.3005% ( 490) 00:26:35.895 2249.076 - 2263.971: 47.1824% ( 482) 00:26:35.895 2263.971 - 2278.865: 48.0661% ( 483) 00:26:35.895 2278.865 - 2293.760: 48.9388% ( 477) 00:26:35.895 2293.760 - 2308.655: 49.8115% ( 477) 00:26:35.895 2308.655 - 2323.549: 50.7026% ( 487) 00:26:35.895 2323.549 - 2338.444: 51.5808% ( 480) 00:26:35.895 2338.444 - 2353.338: 52.4663% ( 484) 00:26:35.895 2353.338 - 2368.233: 53.3574% ( 487) 00:26:35.895 2368.233 - 2383.127: 54.2466% ( 486) 00:26:35.895 2383.127 - 2398.022: 55.1339% ( 485) 00:26:35.895 2398.022 - 2412.916: 55.9792% ( 462) 00:26:35.895 2412.916 - 2427.811: 56.8940% ( 500) 00:26:35.895 2427.811 - 2442.705: 57.7979% ( 494) 00:26:35.895 2442.705 - 2457.600: 58.6834% ( 484) 00:26:35.895 2457.600 - 2472.495: 59.5506% ( 474) 00:26:35.895 2472.495 - 2487.389: 60.4563% ( 495) 00:26:35.895 2487.389 - 2502.284: 61.3254% ( 475) 00:26:35.896 2502.284 - 2517.178: 62.2164% ( 487) 00:26:35.896 2517.178 - 2532.073: 63.1074% ( 487) 00:26:35.896 2532.073 - 2546.967: 63.9875% ( 481) 00:26:35.896 2546.967 - 2561.862: 64.8511% ( 472) 00:26:35.896 2561.862 - 2576.756: 65.7659% ( 500) 00:26:35.896 2576.756 - 2591.651: 66.6569% ( 487) 00:26:35.896 2591.651 - 2606.545: 67.5424% ( 484) 00:26:35.896 2606.545 - 2621.440: 68.4298% ( 485) 00:26:35.896 2621.440 - 2636.335: 69.3666% ( 512) 00:26:35.896 2636.335 - 2651.229: 70.2594% ( 488) 00:26:35.896 2651.229 - 2666.124: 71.1194% ( 470) 00:26:35.896 2666.124 - 2681.018: 72.0305% ( 498) 00:26:35.896 2681.018 - 2695.913: 72.9197% ( 486) 00:26:35.896 2695.913 - 2710.807: 73.8053% ( 484) 00:26:35.896 2710.807 - 2725.702: 74.7018% ( 490) 00:26:35.896 2725.702 - 2740.596: 75.6257% ( 505) 00:26:35.896 2740.596 - 2755.491: 76.5168% ( 487) 00:26:35.896 2755.491 - 2770.385: 77.3968% ( 481) 00:26:35.896 2770.385 - 2785.280: 78.2952% ( 491) 00:26:35.896 2785.280 - 2800.175: 79.2063% ( 498) 00:26:35.896 2800.175 - 2815.069: 80.1175% ( 498) 00:26:35.896 2815.069 - 2829.964: 80.9865% ( 475) 00:26:35.896 2829.964 - 2844.858: 81.8702% ( 483) 00:26:35.896 2844.858 - 2859.753: 82.7448% ( 478) 00:26:35.896 2859.753 - 2874.647: 83.6011% ( 468) 00:26:35.896 2874.647 - 2889.542: 84.4153% ( 445) 00:26:35.896 2889.542 - 2904.436: 85.2294% ( 445) 00:26:35.896 2904.436 - 2919.331: 85.9814% ( 411) 00:26:35.896 2919.331 - 2934.225: 86.6803% ( 382) 00:26:35.896 2934.225 - 2949.120: 87.3463% ( 364) 00:26:35.896 2949.120 - 2964.015: 87.9684% ( 340) 00:26:35.896 2964.015 - 2978.909: 88.5264% ( 305) 00:26:35.896 2978.909 - 2993.804: 89.0296% ( 275) 00:26:35.896 2993.804 - 3008.698: 89.5309% ( 274) 00:26:35.896 3008.698 - 3023.593: 89.9700% ( 240) 00:26:35.896 3023.593 - 3038.487: 90.3780% ( 223) 00:26:35.896 3038.487 - 3053.382: 90.7769% ( 218) 00:26:35.896 3053.382 - 3068.276: 91.1410% ( 199) 00:26:35.896 3068.276 - 3083.171: 91.4410% ( 164) 00:26:35.896 3083.171 - 3098.065: 91.7136% ( 149) 00:26:35.896 3098.065 - 3112.960: 91.9954% ( 154) 00:26:35.896 3112.960 - 3127.855: 92.2497% ( 139) 00:26:35.896 3127.855 - 3142.749: 92.4949% ( 134) 00:26:35.896 3142.749 - 3157.644: 92.7291% ( 128) 00:26:35.896 3157.644 - 3172.538: 92.9706% ( 132) 00:26:35.896 3172.538 - 3187.433: 93.2029% ( 127) 00:26:35.896 3187.433 - 3202.327: 93.4298% ( 124) 00:26:35.896 3202.327 - 3217.222: 93.6347% ( 112) 00:26:35.896 3217.222 - 3232.116: 
93.8579% ( 122) 00:26:35.896 3232.116 - 3247.011: 94.0647% ( 113) 00:26:35.896 3247.011 - 3261.905: 94.2733% ( 114) 00:26:35.896 3261.905 - 3276.800: 94.4727% ( 109) 00:26:35.896 3276.800 - 3291.695: 94.6794% ( 113) 00:26:35.896 3291.695 - 3306.589: 94.8862% ( 113) 00:26:35.896 3306.589 - 3321.484: 95.0875% ( 110) 00:26:35.896 3321.484 - 3336.378: 95.2869% ( 109) 00:26:35.896 3336.378 - 3351.273: 95.4973% ( 115) 00:26:35.896 3351.273 - 3366.167: 95.7022% ( 112) 00:26:35.896 3366.167 - 3381.062: 95.9035% ( 110) 00:26:35.896 3381.062 - 3395.956: 96.1011% ( 108) 00:26:35.896 3395.956 - 3410.851: 96.2987% ( 108) 00:26:35.896 3410.851 - 3425.745: 96.5109% ( 116) 00:26:35.896 3425.745 - 3440.640: 96.7213% ( 115) 00:26:35.896 3440.640 - 3455.535: 96.9226% ( 110) 00:26:35.896 3455.535 - 3470.429: 97.1238% ( 110) 00:26:35.896 3470.429 - 3485.324: 97.3196% ( 107) 00:26:35.896 3485.324 - 3500.218: 97.4971% ( 97) 00:26:35.896 3500.218 - 3515.113: 97.6874% ( 104) 00:26:35.896 3515.113 - 3530.007: 97.8648% ( 97) 00:26:35.896 3530.007 - 3544.902: 98.0295% ( 90) 00:26:35.896 3544.902 - 3559.796: 98.1740% ( 79) 00:26:35.896 3559.796 - 3574.691: 98.3186% ( 79) 00:26:35.896 3574.691 - 3589.585: 98.4375% ( 65) 00:26:35.896 3589.585 - 3604.480: 98.5528% ( 63) 00:26:35.896 3604.480 - 3619.375: 98.6552% ( 56) 00:26:35.896 3619.375 - 3634.269: 98.7357% ( 44) 00:26:35.896 3634.269 - 3649.164: 98.8071% ( 39) 00:26:35.896 3649.164 - 3664.058: 98.8711% ( 35) 00:26:35.896 3664.058 - 3678.953: 98.9224% ( 28) 00:26:35.896 3678.953 - 3693.847: 98.9736% ( 28) 00:26:35.896 3693.847 - 3708.742: 99.0083% ( 19) 00:26:35.896 3708.742 - 3723.636: 99.0486% ( 22) 00:26:35.896 3723.636 - 3738.531: 99.0724% ( 13) 00:26:35.896 3738.531 - 3753.425: 99.1017% ( 16) 00:26:35.896 3753.425 - 3768.320: 99.1273% ( 14) 00:26:35.896 3768.320 - 3783.215: 99.1492% ( 12) 00:26:35.896 3783.215 - 3798.109: 99.1748% ( 14) 00:26:35.896 3798.109 - 3813.004: 99.1931% ( 10) 00:26:35.896 3813.004 - 3842.793: 99.2334% ( 22) 00:26:35.896 3842.793 - 3872.582: 99.2773% ( 24) 00:26:35.896 3872.582 - 3902.371: 99.3194% ( 23) 00:26:35.896 3902.371 - 3932.160: 99.3578% ( 21) 00:26:35.896 3932.160 - 3961.949: 99.3981% ( 22) 00:26:35.896 3961.949 - 3991.738: 99.4383% ( 22) 00:26:35.896 3991.738 - 4021.527: 99.4731% ( 19) 00:26:35.896 4021.527 - 4051.316: 99.5060% ( 18) 00:26:35.896 4051.316 - 4081.105: 99.5371% ( 17) 00:26:35.897 4081.105 - 4110.895: 99.5645% ( 15) 00:26:35.897 4110.895 - 4140.684: 99.5902% ( 14) 00:26:35.897 4140.684 - 4170.473: 99.6176% ( 15) 00:26:35.897 4170.473 - 4200.262: 99.6432% ( 14) 00:26:35.897 4200.262 - 4230.051: 99.6725% ( 16) 00:26:35.897 4230.051 - 4259.840: 99.6926% ( 11) 00:26:35.897 4259.840 - 4289.629: 99.7146% ( 12) 00:26:35.897 4289.629 - 4319.418: 99.7310% ( 9) 00:26:35.897 4319.418 - 4349.207: 99.7475% ( 9) 00:26:35.897 4349.207 - 4378.996: 99.7603% ( 7) 00:26:35.897 4378.996 - 4408.785: 99.7695% ( 5) 00:26:35.897 4408.785 - 4438.575: 99.7786% ( 5) 00:26:35.897 4438.575 - 4468.364: 99.7859% ( 4) 00:26:35.897 4468.364 - 4498.153: 99.7933% ( 4) 00:26:35.897 4498.153 - 4527.942: 99.7987% ( 3) 00:26:35.897 4527.942 - 4557.731: 99.8042% ( 3) 00:26:35.897 4557.731 - 4587.520: 99.8079% ( 2) 00:26:35.897 4587.520 - 4617.309: 99.8115% ( 2) 00:26:35.897 4617.309 - 4647.098: 99.8152% ( 2) 00:26:35.897 4647.098 - 4676.887: 99.8189% ( 2) 00:26:35.897 4676.887 - 4706.676: 99.8244% ( 3) 00:26:35.897 4706.676 - 4736.465: 99.8280% ( 2) 00:26:35.897 4736.465 - 4766.255: 99.8317% ( 2) 00:26:35.897 4766.255 - 4796.044: 99.8353% ( 2) 
00:26:35.897 4796.044 - 4825.833: 99.8372% ( 1) 00:26:35.897 4825.833 - 4855.622: 99.8408% ( 2) 00:26:35.897 4855.622 - 4885.411: 99.8445% ( 2) 00:26:35.897 4885.411 - 4915.200: 99.8481% ( 2) 00:26:35.897 4915.200 - 4944.989: 99.8518% ( 2) 00:26:35.897 4944.989 - 4974.778: 99.8555% ( 2) 00:26:35.897 4974.778 - 5004.567: 99.8591% ( 2) 00:26:35.897 5004.567 - 5034.356: 99.8628% ( 2) 00:26:35.897 5034.356 - 5064.145: 99.8664% ( 2) 00:26:35.897 5064.145 - 5093.935: 99.8683% ( 1) 00:26:35.897 5093.935 - 5123.724: 99.8756% ( 4) 00:26:35.897 5123.724 - 5153.513: 99.8792% ( 2) 00:26:35.897 5153.513 - 5183.302: 99.8811% ( 1) 00:26:35.897 5183.302 - 5213.091: 99.8847% ( 2) 00:26:35.897 5213.091 - 5242.880: 99.8884% ( 2) 00:26:35.897 5242.880 - 5272.669: 99.8921% ( 2) 00:26:35.897 5272.669 - 5302.458: 99.8957% ( 2) 00:26:35.897 5302.458 - 5332.247: 99.8994% ( 2) 00:26:35.897 5332.247 - 5362.036: 99.9012% ( 1) 00:26:35.897 5362.036 - 5391.825: 99.9049% ( 2) 00:26:35.897 5391.825 - 5421.615: 99.9067% ( 1) 00:26:35.897 5421.615 - 5451.404: 99.9085% ( 1) 00:26:35.897 5451.404 - 5481.193: 99.9103% ( 1) 00:26:35.897 5481.193 - 5510.982: 99.9122% ( 1) 00:26:35.897 5510.982 - 5540.771: 99.9140% ( 1) 00:26:35.897 5540.771 - 5570.560: 99.9158% ( 1) 00:26:35.897 5570.560 - 5600.349: 99.9177% ( 1) 00:26:35.897 5600.349 - 5630.138: 99.9195% ( 1) 00:26:35.897 5630.138 - 5659.927: 99.9213% ( 1) 00:26:35.897 5689.716 - 5719.505: 99.9232% ( 1) 00:26:35.897 5719.505 - 5749.295: 99.9250% ( 1) 00:26:35.897 5749.295 - 5779.084: 99.9268% ( 1) 00:26:35.897 5779.084 - 5808.873: 99.9286% ( 1) 00:26:35.897 5808.873 - 5838.662: 99.9305% ( 1) 00:26:35.897 5838.662 - 5868.451: 99.9323% ( 1) 00:26:35.897 5868.451 - 5898.240: 99.9341% ( 1) 00:26:35.897 5898.240 - 5928.029: 99.9360% ( 1) 00:26:35.897 5928.029 - 5957.818: 99.9378% ( 1) 00:26:35.897 5957.818 - 5987.607: 99.9396% ( 1) 00:26:35.897 5987.607 - 6017.396: 99.9415% ( 1) 00:26:35.897 6017.396 - 6047.185: 99.9433% ( 1) 00:26:35.897 6047.185 - 6076.975: 99.9451% ( 1) 00:26:35.897 6106.764 - 6136.553: 99.9469% ( 1) 00:26:35.897 6136.553 - 6166.342: 99.9488% ( 1) 00:26:35.897 6166.342 - 6196.131: 99.9506% ( 1) 00:26:35.897 6196.131 - 6225.920: 99.9524% ( 1) 00:26:35.897 6225.920 - 6255.709: 99.9543% ( 1) 00:26:35.897 6255.709 - 6285.498: 99.9561% ( 1) 00:26:35.897 6285.498 - 6315.287: 99.9579% ( 1) 00:26:35.897 6315.287 - 6345.076: 99.9597% ( 1) 00:26:35.897 6345.076 - 6374.865: 99.9616% ( 1) 00:26:35.897 6374.865 - 6404.655: 99.9634% ( 1) 00:26:35.897 6404.655 - 6434.444: 99.9652% ( 1) 00:26:35.897 6434.444 - 6464.233: 99.9671% ( 1) 00:26:35.897 6464.233 - 6494.022: 99.9689% ( 1) 00:26:35.897 6494.022 - 6523.811: 99.9707% ( 1) 00:26:35.897 6523.811 - 6553.600: 99.9726% ( 1) 00:26:35.897 6553.600 - 6583.389: 99.9744% ( 1) 00:26:35.897 6583.389 - 6613.178: 99.9762% ( 1) 00:26:35.897 6613.178 - 6642.967: 99.9780% ( 1) 00:26:35.897 6642.967 - 6672.756: 99.9799% ( 1) 00:26:35.897 6702.545 - 6732.335: 99.9817% ( 1) 00:26:35.897 6732.335 - 6762.124: 99.9835% ( 1) 00:26:35.897 6762.124 - 6791.913: 99.9854% ( 1) 00:26:35.897 6791.913 - 6821.702: 99.9872% ( 1) 00:26:35.897 6821.702 - 6851.491: 99.9890% ( 1) 00:26:35.897 6851.491 - 6881.280: 99.9909% ( 1) 00:26:35.897 6881.280 - 6911.069: 99.9927% ( 1) 00:26:35.897 6911.069 - 6940.858: 99.9945% ( 1) 00:26:35.897 6940.858 - 6970.647: 99.9963% ( 1) 00:26:35.897 6970.647 - 7000.436: 99.9982% ( 1) 00:26:35.897 7030.225 - 7060.015: 100.0000% ( 1) 00:26:35.897 00:26:35.897 21:09:03 -- nvme/nvme.sh@23 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:26:37.268 Initializing NVMe Controllers 00:26:37.268 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:37.268 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:37.268 Initialization complete. Launching workers. 00:26:37.268 ======================================================== 00:26:37.268 Latency(us) 00:26:37.268 Device Information : IOPS MiB/s Average min max 00:26:37.268 PCIE (0000:00:06.0) NSID 1 from core 0: 65075.93 762.61 1966.85 739.32 10895.50 00:26:37.268 ======================================================== 00:26:37.268 Total : 65075.93 762.61 1966.85 739.32 10895.50 00:26:37.268 00:26:37.268 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:37.268 ================================================================================= 00:26:37.268 1.00000% : 1377.745us 00:26:37.268 10.00000% : 1601.164us 00:26:37.268 25.00000% : 1742.662us 00:26:37.268 50.00000% : 1921.396us 00:26:37.268 75.00000% : 2144.815us 00:26:37.268 90.00000% : 2368.233us 00:26:37.268 95.00000% : 2591.651us 00:26:37.268 98.00000% : 2889.542us 00:26:37.268 99.00000% : 3098.065us 00:26:37.268 99.50000% : 3291.695us 00:26:37.268 99.90000% : 5272.669us 00:26:37.268 99.99000% : 10545.338us 00:26:37.268 99.99900% : 10902.807us 00:26:37.268 99.99990% : 10902.807us 00:26:37.268 99.99999% : 10902.807us 00:26:37.268 00:26:37.268 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:37.268 ============================================================================== 00:26:37.268 Range in us Cumulative IO count 00:26:37.268 737.280 - 741.004: 0.0015% ( 1) 00:26:37.268 841.542 - 845.265: 0.0031% ( 1) 00:26:37.268 882.502 - 886.225: 0.0046% ( 1) 00:26:37.268 934.633 - 938.356: 0.0061% ( 1) 00:26:37.268 938.356 - 942.080: 0.0077% ( 1) 00:26:37.268 953.251 - 960.698: 0.0108% ( 2) 00:26:37.268 990.487 - 997.935: 0.0123% ( 1) 00:26:37.268 1005.382 - 1012.829: 0.0138% ( 1) 00:26:37.268 1012.829 - 1020.276: 0.0169% ( 2) 00:26:37.268 1020.276 - 1027.724: 0.0184% ( 1) 00:26:37.268 1035.171 - 1042.618: 0.0230% ( 3) 00:26:37.268 1050.065 - 1057.513: 0.0261% ( 2) 00:26:37.268 1057.513 - 1064.960: 0.0277% ( 1) 00:26:37.268 1072.407 - 1079.855: 0.0323% ( 3) 00:26:37.268 1079.855 - 1087.302: 0.0353% ( 2) 00:26:37.268 1087.302 - 1094.749: 0.0384% ( 2) 00:26:37.268 1094.749 - 1102.196: 0.0446% ( 4) 00:26:37.268 1102.196 - 1109.644: 0.0492% ( 3) 00:26:37.268 1117.091 - 1124.538: 0.0507% ( 1) 00:26:37.268 1124.538 - 1131.985: 0.0584% ( 5) 00:26:37.268 1131.985 - 1139.433: 0.0645% ( 4) 00:26:37.268 1139.433 - 1146.880: 0.0691% ( 3) 00:26:37.268 1146.880 - 1154.327: 0.0753% ( 4) 00:26:37.268 1154.327 - 1161.775: 0.0876% ( 8) 00:26:37.268 1161.775 - 1169.222: 0.0937% ( 4) 00:26:37.268 1169.222 - 1176.669: 0.0999% ( 4) 00:26:37.268 1176.669 - 1184.116: 0.1045% ( 3) 00:26:37.268 1184.116 - 1191.564: 0.1183% ( 9) 00:26:37.268 1191.564 - 1199.011: 0.1275% ( 6) 00:26:37.268 1199.011 - 1206.458: 0.1414% ( 9) 00:26:37.268 1206.458 - 1213.905: 0.1521% ( 7) 00:26:37.269 1213.905 - 1221.353: 0.1813% ( 19) 00:26:37.269 1221.353 - 1228.800: 0.2459% ( 42) 00:26:37.269 1228.800 - 1236.247: 0.2597% ( 9) 00:26:37.269 1236.247 - 1243.695: 0.2735% ( 9) 00:26:37.269 1243.695 - 1251.142: 0.2904% ( 11) 00:26:37.269 1251.142 - 1258.589: 0.3012% ( 7) 00:26:37.269 1258.589 - 1266.036: 0.3288% ( 18) 00:26:37.269 1266.036 - 1273.484: 0.3550% ( 17) 00:26:37.269 1273.484 - 1280.931: 0.3857% ( 20) 00:26:37.269 1280.931 
- 1288.378: 0.4041% ( 12) 00:26:37.269 1288.378 - 1295.825: 0.4241% ( 13) 00:26:37.269 1295.825 - 1303.273: 0.4487% ( 16) 00:26:37.269 1303.273 - 1310.720: 0.4840% ( 23) 00:26:37.269 1310.720 - 1318.167: 0.5563% ( 47) 00:26:37.269 1318.167 - 1325.615: 0.6070% ( 33) 00:26:37.269 1325.615 - 1333.062: 0.6223% ( 10) 00:26:37.269 1333.062 - 1340.509: 0.6638% ( 27) 00:26:37.269 1340.509 - 1347.956: 0.7238% ( 39) 00:26:37.269 1347.956 - 1355.404: 0.7637% ( 26) 00:26:37.269 1355.404 - 1362.851: 0.8252% ( 40) 00:26:37.269 1362.851 - 1370.298: 0.9312% ( 69) 00:26:37.269 1370.298 - 1377.745: 1.0818% ( 98) 00:26:37.269 1377.745 - 1385.193: 1.1725% ( 59) 00:26:37.269 1385.193 - 1392.640: 1.2770% ( 68) 00:26:37.269 1392.640 - 1400.087: 1.3753% ( 64) 00:26:37.269 1400.087 - 1407.535: 1.5244% ( 97) 00:26:37.269 1407.535 - 1414.982: 1.6596% ( 88) 00:26:37.269 1414.982 - 1422.429: 1.8302% ( 111) 00:26:37.269 1422.429 - 1429.876: 2.0330% ( 132) 00:26:37.269 1429.876 - 1437.324: 2.2650% ( 151) 00:26:37.269 1437.324 - 1444.771: 2.4387% ( 113) 00:26:37.269 1444.771 - 1452.218: 2.6277% ( 123) 00:26:37.269 1452.218 - 1459.665: 2.8090% ( 118) 00:26:37.269 1459.665 - 1467.113: 3.0534% ( 159) 00:26:37.269 1467.113 - 1474.560: 3.2946% ( 157) 00:26:37.269 1474.560 - 1482.007: 3.5543% ( 169) 00:26:37.269 1482.007 - 1489.455: 3.8386% ( 185) 00:26:37.269 1489.455 - 1496.902: 4.1567% ( 207) 00:26:37.269 1496.902 - 1504.349: 4.4794% ( 210) 00:26:37.269 1504.349 - 1511.796: 4.8758% ( 258) 00:26:37.269 1511.796 - 1519.244: 5.2354% ( 234) 00:26:37.269 1519.244 - 1526.691: 5.6611% ( 277) 00:26:37.269 1526.691 - 1534.138: 6.1359% ( 309) 00:26:37.269 1534.138 - 1541.585: 6.5892% ( 295) 00:26:37.269 1541.585 - 1549.033: 7.0118% ( 275) 00:26:37.269 1549.033 - 1556.480: 7.4421% ( 280) 00:26:37.269 1556.480 - 1563.927: 7.8032% ( 235) 00:26:37.269 1563.927 - 1571.375: 8.3103% ( 330) 00:26:37.269 1571.375 - 1578.822: 8.8358% ( 342) 00:26:37.269 1578.822 - 1586.269: 9.3475% ( 333) 00:26:37.269 1586.269 - 1593.716: 9.7701% ( 275) 00:26:37.269 1593.716 - 1601.164: 10.2649% ( 322) 00:26:37.269 1601.164 - 1608.611: 10.8058% ( 352) 00:26:37.269 1608.611 - 1616.058: 11.3606% ( 361) 00:26:37.269 1616.058 - 1623.505: 11.9122% ( 359) 00:26:37.269 1623.505 - 1630.953: 12.5776% ( 433) 00:26:37.269 1630.953 - 1638.400: 13.2338% ( 427) 00:26:37.269 1638.400 - 1645.847: 13.9867% ( 490) 00:26:37.269 1645.847 - 1653.295: 14.7412% ( 491) 00:26:37.269 1653.295 - 1660.742: 15.4419% ( 456) 00:26:37.269 1660.742 - 1668.189: 16.1304% ( 448) 00:26:37.269 1668.189 - 1675.636: 16.9325% ( 522) 00:26:37.269 1675.636 - 1683.084: 17.7300% ( 519) 00:26:37.269 1683.084 - 1690.531: 18.8195% ( 709) 00:26:37.269 1690.531 - 1697.978: 19.6463% ( 538) 00:26:37.269 1697.978 - 1705.425: 20.4269% ( 508) 00:26:37.269 1705.425 - 1712.873: 21.2413% ( 530) 00:26:37.269 1712.873 - 1720.320: 22.2862% ( 680) 00:26:37.269 1720.320 - 1727.767: 23.3358% ( 683) 00:26:37.269 1727.767 - 1735.215: 24.3515% ( 661) 00:26:37.269 1735.215 - 1742.662: 25.1183% ( 499) 00:26:37.269 1742.662 - 1750.109: 26.0219% ( 588) 00:26:37.269 1750.109 - 1757.556: 26.8379% ( 531) 00:26:37.269 1757.556 - 1765.004: 27.7122% ( 569) 00:26:37.269 1765.004 - 1772.451: 28.5113% ( 520) 00:26:37.269 1772.451 - 1779.898: 29.7329% ( 795) 00:26:37.269 1779.898 - 1787.345: 30.8747% ( 743) 00:26:37.269 1787.345 - 1794.793: 32.2807% ( 915) 00:26:37.269 1794.793 - 1802.240: 33.5592% ( 832) 00:26:37.269 1802.240 - 1809.687: 34.6241% ( 693) 00:26:37.269 1809.687 - 1817.135: 35.6614% ( 675) 00:26:37.269 1817.135 - 1824.582: 
36.7632% ( 717) 00:26:37.269 1824.582 - 1832.029: 37.9034% ( 742) 00:26:37.269 1832.029 - 1839.476: 38.8884% ( 641) 00:26:37.269 1839.476 - 1846.924: 40.0685% ( 768) 00:26:37.269 1846.924 - 1854.371: 41.2933% ( 797) 00:26:37.269 1854.371 - 1861.818: 42.4227% ( 735) 00:26:37.269 1861.818 - 1869.265: 43.7565% ( 868) 00:26:37.269 1869.265 - 1876.713: 44.9997% ( 809) 00:26:37.269 1876.713 - 1884.160: 46.0569% ( 688) 00:26:37.269 1884.160 - 1891.607: 47.2678% ( 788) 00:26:37.269 1891.607 - 1899.055: 48.7107% ( 939) 00:26:37.269 1899.055 - 1906.502: 49.7741% ( 692) 00:26:37.269 1906.502 - 1921.396: 51.9531% ( 1418) 00:26:37.269 1921.396 - 1936.291: 54.4256% ( 1609) 00:26:37.269 1936.291 - 1951.185: 56.6200% ( 1428) 00:26:37.269 1951.185 - 1966.080: 58.5561% ( 1260) 00:26:37.269 1966.080 - 1980.975: 60.4001% ( 1200) 00:26:37.269 1980.975 - 1995.869: 62.3363% ( 1260) 00:26:37.269 1995.869 - 2010.764: 64.0390% ( 1108) 00:26:37.269 2010.764 - 2025.658: 65.5357% ( 974) 00:26:37.269 2025.658 - 2040.553: 67.2322% ( 1104) 00:26:37.269 2040.553 - 2055.447: 68.7412% ( 982) 00:26:37.269 2055.447 - 2070.342: 70.1196% ( 897) 00:26:37.269 2070.342 - 2085.236: 71.2721% ( 750) 00:26:37.269 2085.236 - 2100.131: 72.4799% ( 786) 00:26:37.269 2100.131 - 2115.025: 73.7153% ( 804) 00:26:37.269 2115.025 - 2129.920: 74.8556% ( 742) 00:26:37.269 2129.920 - 2144.815: 75.9066% ( 684) 00:26:37.269 2144.815 - 2159.709: 77.3019% ( 908) 00:26:37.269 2159.709 - 2174.604: 78.4529% ( 749) 00:26:37.269 2174.604 - 2189.498: 79.6238% ( 762) 00:26:37.269 2189.498 - 2204.393: 80.8363% ( 789) 00:26:37.269 2204.393 - 2219.287: 81.8382% ( 652) 00:26:37.269 2219.287 - 2234.182: 83.0383% ( 781) 00:26:37.269 2234.182 - 2249.076: 84.0218% ( 640) 00:26:37.269 2249.076 - 2263.971: 85.0928% ( 697) 00:26:37.269 2263.971 - 2278.865: 86.1270% ( 673) 00:26:37.269 2278.865 - 2293.760: 86.9722% ( 550) 00:26:37.269 2293.760 - 2308.655: 87.6898% ( 467) 00:26:37.269 2308.655 - 2323.549: 88.2875% ( 389) 00:26:37.269 2323.549 - 2338.444: 88.9283% ( 417) 00:26:37.269 2338.444 - 2353.338: 89.6183% ( 449) 00:26:37.269 2353.338 - 2368.233: 90.1146% ( 323) 00:26:37.269 2368.233 - 2383.127: 90.5388% ( 276) 00:26:37.269 2383.127 - 2398.022: 91.1073% ( 370) 00:26:37.269 2398.022 - 2412.916: 91.5007% ( 256) 00:26:37.269 2412.916 - 2427.811: 91.8895% ( 253) 00:26:37.269 2427.811 - 2442.705: 92.2752% ( 251) 00:26:37.269 2442.705 - 2457.600: 92.6040% ( 214) 00:26:37.269 2457.600 - 2472.495: 92.9267% ( 210) 00:26:37.269 2472.495 - 2487.389: 93.2494% ( 210) 00:26:37.269 2487.389 - 2502.284: 93.5460% ( 193) 00:26:37.269 2502.284 - 2517.178: 93.8641% ( 207) 00:26:37.269 2517.178 - 2532.073: 94.1468% ( 184) 00:26:37.269 2532.073 - 2546.967: 94.4496% ( 197) 00:26:37.269 2546.967 - 2561.862: 94.6893% ( 156) 00:26:37.269 2561.862 - 2576.756: 94.9229% ( 152) 00:26:37.269 2576.756 - 2591.651: 95.1549% ( 151) 00:26:37.269 2591.651 - 2606.545: 95.3608% ( 134) 00:26:37.269 2606.545 - 2621.440: 95.5606% ( 130) 00:26:37.269 2621.440 - 2636.335: 95.7281% ( 109) 00:26:37.269 2636.335 - 2651.229: 95.9140% ( 121) 00:26:37.269 2651.229 - 2666.124: 96.0846% ( 111) 00:26:37.269 2666.124 - 2681.018: 96.2705% ( 121) 00:26:37.269 2681.018 - 2695.913: 96.4257% ( 101) 00:26:37.269 2695.913 - 2710.807: 96.5702% ( 94) 00:26:37.269 2710.807 - 2725.702: 96.7054% ( 88) 00:26:37.269 2725.702 - 2740.596: 96.8391% ( 87) 00:26:37.269 2740.596 - 2755.491: 96.9943% ( 101) 00:26:37.269 2755.491 - 2770.385: 97.1971% ( 132) 00:26:37.269 2770.385 - 2785.280: 97.3323% ( 88) 00:26:37.269 2785.280 - 2800.175: 
97.4584% ( 82) [nvme_perf latency histogram tail condensed: cumulative percentile buckets from 2800.175 to 10902.807, reaching 100.0000% at 10902.807]
00:26:37.270 00:26:37.270 21:09:05 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:26:37.270 00:26:37.270 real 0m2.610s 00:26:37.270 user 0m2.264s 00:26:37.270 sys 0m0.212s 00:26:37.270 21:09:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.270 ************************************ 00:26:37.270 END TEST nvme_perf 00:26:37.270 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.270 ************************************ 00:26:37.270 21:09:05 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:26:37.270 21:09:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:37.270 21:09:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.270 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.270 ************************************ 00:26:37.270 START TEST nvme_hello_world 00:26:37.270 ************************************ 00:26:37.270 21:09:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:26:37.528 Initializing NVMe Controllers 00:26:37.528 Attached to 0000:00:06.0 00:26:37.528 Namespace ID: 1 size: 5GB 00:26:37.528 Initialization complete. 00:26:37.528 INFO: using host memory buffer for IO 00:26:37.528 Hello world! 00:26:37.528 00:26:37.528 real 0m0.313s 00:26:37.528 user 0m0.100s 00:26:37.528 sys 0m0.139s 00:26:37.528 21:09:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.528 ************************************ 00:26:37.528 END TEST nvme_hello_world 00:26:37.528 ************************************ 00:26:37.528 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.528 21:09:05 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:26:37.528 21:09:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:37.528 21:09:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.528 21:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.528 ************************************ 00:26:37.528 START TEST nvme_sgl 00:26:37.528 ************************************ 00:26:37.528 21:09:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:26:37.785 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:26:37.785 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:26:37.785 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:26:37.785 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:26:37.785 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:26:38.044 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:26:38.044 NVMe Readv/Writev Request test 00:26:38.044 Attached to 0000:00:06.0 00:26:38.044 0000:00:06.0: build_io_request_2 test passed 00:26:38.044 0000:00:06.0: build_io_request_4 test passed 00:26:38.044 0000:00:06.0: build_io_request_5 test passed 00:26:38.044 0000:00:06.0: build_io_request_6 test passed 00:26:38.044 0000:00:06.0: build_io_request_7 test passed 00:26:38.044 0000:00:06.0: build_io_request_10 test passed 00:26:38.044 Cleaning up...
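The hello_world and sgl runs above go through the run_test helper from autotest_common.sh, which prints the START TEST/END TEST banners seen throughout this log around each test binary. A minimal sketch of that wrapper pattern in bash follows; it is an illustrative reduction, not the exact helper, which also handles xtrace toggling and per-test timing reports:

run_test() {
    # simplified sketch of the banner-printing wrapper seen in this log
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"               # run the test binary or function with its arguments
    local rc=$?        # $? expands before 'local' runs, so this captures the test's exit code
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# e.g. the invocation that produced the hello_world output above:
run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0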
00:26:38.044 00:26:38.044 real 0m0.356s 00:26:38.044 user 0m0.188s 00:26:38.044 sys 0m0.094s 00:26:38.044 21:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.044 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.044 ************************************ 00:26:38.044 END TEST nvme_sgl 00:26:38.044 ************************************ 00:26:38.044 21:09:06 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:26:38.044 21:09:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:38.044 21:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.044 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.044 ************************************ 00:26:38.044 START TEST nvme_e2edp 00:26:38.044 ************************************ 00:26:38.044 21:09:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:26:38.303 NVMe Write/Read with End-to-End data protection test 00:26:38.303 Attached to 0000:00:06.0 00:26:38.303 Cleaning up... 00:26:38.303 00:26:38.303 real 0m0.275s 00:26:38.303 user 0m0.070s 00:26:38.303 sys 0m0.122s 00:26:38.303 21:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.303 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.303 ************************************ 00:26:38.303 END TEST nvme_e2edp 00:26:38.303 ************************************ 00:26:38.303 21:09:06 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:26:38.303 21:09:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:38.303 21:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.303 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.303 ************************************ 00:26:38.303 START TEST nvme_reserve 00:26:38.303 ************************************ 00:26:38.303 21:09:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:26:38.562 ===================================================== 00:26:38.562 NVMe Controller at PCI bus 0, device 6, function 0 00:26:38.562 ===================================================== 00:26:38.562 Reservations: Not Supported 00:26:38.562 Reservation test passed 00:26:38.562 00:26:38.562 real 0m0.306s 00:26:38.562 user 0m0.134s 00:26:38.562 sys 0m0.105s 00:26:38.562 21:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.562 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.562 ************************************ 00:26:38.562 END TEST nvme_reserve 00:26:38.562 ************************************ 00:26:38.562 21:09:06 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:26:38.562 21:09:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:38.562 21:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.562 21:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.562 ************************************ 00:26:38.562 START TEST nvme_err_injection 00:26:38.562 ************************************ 00:26:38.562 21:09:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:26:38.820 NVMe Error Injection test 00:26:38.820 Attached to 0000:00:06.0 00:26:38.820 0000:00:06.0: get features failed as expected 00:26:38.820 0000:00:06.0: get features successfully as expected 00:26:38.820 0000:00:06.0: 
read failed as expected 00:26:38.820 0000:00:06.0: read successfully as expected 00:26:38.820 Cleaning up... 00:26:39.079 00:26:39.079 real 0m0.273s 00:26:39.079 user 0m0.105s 00:26:39.079 sys 0m0.104s 00:26:39.079 21:09:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.079 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:39.079 ************************************ 00:26:39.079 END TEST nvme_err_injection 00:26:39.079 ************************************ 00:26:39.079 21:09:07 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:26:39.079 21:09:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:26:39.079 21:09:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:39.079 21:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:39.079 ************************************ 00:26:39.079 START TEST nvme_overhead 00:26:39.079 ************************************ 00:26:39.079 21:09:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:26:40.458 Initializing NVMe Controllers 00:26:40.458 Attached to 0000:00:06.0 00:26:40.458 Initialization complete. Launching workers. 00:26:40.458 submit (in ns) avg, min, max = 15833.3, 12200.0, 81498.2 00:26:40.458 complete (in ns) avg, min, max = 11767.4, 7678.2, 1005308.6 00:26:40.458 00:26:40.458 Submit histogram 00:26:40.458 ================ 00:26:40.458 Range in us Cumulative Count 00:26:40.458 [submit latency histogram condensed: cumulative buckets from 12.160 us to 81.920 us; 100.0000% reached at 81.920 us]
00:26:40.459 00:26:40.459 Complete histogram 00:26:40.459 ================== 00:26:40.459 Range in us Cumulative Count 00:26:40.459 [complete latency histogram condensed: cumulative buckets from 7.622 us to 1005.382 us; 100.0000% reached at 1005.382 us]
00:26:40.461 00:26:40.461 00:26:40.461 real 0m1.344s 00:26:40.461 user 0m1.120s 00:26:40.461 sys 0m0.141s 00:26:40.461 21:09:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.461 21:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:40.461 ************************************ 00:26:40.461 END TEST nvme_overhead 00:26:40.461 ************************************ 00:26:40.461 21:09:08 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:26:40.461 21:09:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:40.461 21:09:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.461 21:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:40.461 ************************************ 00:26:40.461 START TEST nvme_arbitration 00:26:40.461 ************************************ 00:26:40.461 21:09:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:26:44.651 Initializing NVMe Controllers 00:26:44.651 Attached to 0000:00:06.0 00:26:44.651 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:26:44.651 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:26:44.651 Associating QEMU NVMe
Ctrl (12340 ) with lcore 2 00:26:44.651 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:26:44.651 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:26:44.651 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:26:44.651 Initialization complete. Launching workers. 00:26:44.651 Starting thread on core 1 with urgent priority queue 00:26:44.651 Starting thread on core 2 with urgent priority queue 00:26:44.651 Starting thread on core 0 with urgent priority queue 00:26:44.651 Starting thread on core 3 with urgent priority queue 00:26:44.651 QEMU NVMe Ctrl (12340 ) core 0: 1770.67 IO/s 56.48 secs/100000 ios 00:26:44.651 QEMU NVMe Ctrl (12340 ) core 1: 1301.33 IO/s 76.84 secs/100000 ios 00:26:44.651 QEMU NVMe Ctrl (12340 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:26:44.651 QEMU NVMe Ctrl (12340 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:26:44.651 ======================================================== 00:26:44.651 00:26:44.651 00:26:44.651 real 0m3.515s 00:26:44.651 user 0m9.529s 00:26:44.651 sys 0m0.137s 00:26:44.651 21:09:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.651 21:09:11 -- common/autotest_common.sh@10 -- # set +x 00:26:44.651 ************************************ 00:26:44.651 END TEST nvme_arbitration 00:26:44.651 ************************************ 00:26:44.651 21:09:12 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:26:44.651 21:09:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:44.651 21:09:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.651 21:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:44.651 ************************************ 00:26:44.651 START TEST nvme_single_aen 00:26:44.651 ************************************ 00:26:44.652 21:09:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:26:44.652 [2024-06-09 21:09:12.069089] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:44.652 [2024-06-09 21:09:12.069207] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.652 [2024-06-09 21:09:12.265315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:44.652 Asynchronous Event Request test 00:26:44.652 Attached to 0000:00:06.0 00:26:44.652 Reset controller to setup AER completions for this process 00:26:44.652 Registering asynchronous event callbacks... 00:26:44.652 Getting orig temperature thresholds of all controllers 00:26:44.652 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:44.652 Setting all controllers temperature threshold low to trigger AER 00:26:44.652 Waiting for all controllers temperature threshold to be set lower 00:26:44.652 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:44.652 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:26:44.652 Waiting for all controllers to trigger AER and reset threshold 00:26:44.652 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:44.652 Cleaning up... 
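The aer test above fires an Asynchronous Event Request by dropping the temperature threshold (originally 343 Kelvin / 70 Celsius) below the controller's current reading of 323 Kelvin (50 Celsius). The same trigger can be reproduced by hand with nvme-cli against a kernel-attached controller; this is a hedged sketch, assuming the device is visible as /dev/nvme0 and not claimed by SPDK's userspace driver, and that feature ID 0x04 (the NVMe temperature threshold) is supported:

# 0x142 = 322 Kelvin, just below the 323 Kelvin reading logged above
nvme set-feature /dev/nvme0 --feature-id=0x04 --value=0x142
# the controller should now flag a temperature critical warning
nvme smart-log /dev/nvme0 | grep -iE 'critical_warning|temperature'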
00:26:44.652 00:26:44.652 real 0m0.294s 00:26:44.652 user 0m0.113s 00:26:44.652 sys 0m0.113s 00:26:44.652 21:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.652 21:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:44.652 ************************************ 00:26:44.652 END TEST nvme_single_aen 00:26:44.652 ************************************ 00:26:44.652 21:09:12 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:26:44.652 21:09:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:44.652 21:09:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:44.652 21:09:12 -- common/autotest_common.sh@10 -- # set +x 00:26:44.652 ************************************ 00:26:44.652 START TEST nvme_doorbell_aers 00:26:44.652 ************************************ 00:26:44.652 21:09:12 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:26:44.652 21:09:12 -- nvme/nvme.sh@70 -- # bdfs=() 00:26:44.652 21:09:12 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:26:44.652 21:09:12 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:26:44.652 21:09:12 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:26:44.652 21:09:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:44.652 21:09:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:44.652 21:09:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:44.652 21:09:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:44.652 21:09:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:44.652 21:09:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:26:44.652 21:09:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:26:44.652 21:09:12 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:26:44.652 21:09:12 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:44.652 [2024-06-09 21:09:12.670745] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133339) is not found. Dropping the request. 00:26:54.625 Executing: test_write_invalid_db 00:26:54.625 Waiting for AER completion... 00:26:54.625 Failure: test_write_invalid_db 00:26:54.625 00:26:54.625 Executing: test_invalid_db_write_overflow_sq 00:26:54.625 Waiting for AER completion... 00:26:54.625 Failure: test_invalid_db_write_overflow_sq 00:26:54.625 00:26:54.625 Executing: test_invalid_db_write_overflow_cq 00:26:54.625 Waiting for AER completion... 
00:26:54.625 Failure: test_invalid_db_write_overflow_cq 00:26:54.625 00:26:54.625 00:26:54.625 real 0m10.094s 00:26:54.625 user 0m8.459s 00:26:54.625 sys 0m1.601s 00:26:54.625 21:09:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.625 21:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:54.625 ************************************ 00:26:54.625 END TEST nvme_doorbell_aers 00:26:54.625 ************************************ 00:26:54.625 21:09:22 -- nvme/nvme.sh@97 -- # uname 00:26:54.625 21:09:22 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:26:54.625 21:09:22 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:26:54.625 21:09:22 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:54.625 21:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:54.625 21:09:22 -- common/autotest_common.sh@10 -- # set +x 00:26:54.625 ************************************ 00:26:54.625 START TEST nvme_multi_aen 00:26:54.625 ************************************ 00:26:54.625 21:09:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:26:54.625 [2024-06-09 21:09:22.552438] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:54.625 [2024-06-09 21:09:22.552597] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.625 [2024-06-09 21:09:22.761673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:54.625 [2024-06-09 21:09:22.761727] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133339) is not found. Dropping the request. 00:26:54.625 [2024-06-09 21:09:22.761823] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133339) is not found. Dropping the request. 00:26:54.625 [2024-06-09 21:09:22.761849] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133339) is not found. Dropping the request. 00:26:54.625 [2024-06-09 21:09:22.765436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:54.625 Child process pid: 133543 00:26:54.625 [2024-06-09 21:09:22.765599] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.883 [Child] Asynchronous Event Request test 00:26:54.883 [Child] Attached to 0000:00:06.0 00:26:54.883 [Child] Registering asynchronous event callbacks... 00:26:54.883 [Child] Getting orig temperature thresholds of all controllers 00:26:54.883 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:54.883 [Child] Waiting for all controllers to trigger AER and reset threshold 00:26:54.883 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:54.883 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:54.883 [Child] Cleaning up... 00:26:55.142 Asynchronous Event Request test 00:26:55.142 Attached to 0000:00:06.0 00:26:55.142 Reset controller to setup AER completions for this process 00:26:55.142 Registering asynchronous event callbacks... 
00:26:55.142 Getting orig temperature thresholds of all controllers 00:26:55.142 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:55.142 Setting all controllers temperature threshold low to trigger AER 00:26:55.142 Waiting for all controllers temperature threshold to be set lower 00:26:55.142 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:55.142 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:26:55.142 Waiting for all controllers to trigger AER and reset threshold 00:26:55.142 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:55.142 Cleaning up... 00:26:55.142 00:26:55.142 real 0m0.589s 00:26:55.142 user 0m0.181s 00:26:55.142 sys 0m0.231s 00:26:55.142 21:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.142 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:55.142 ************************************ 00:26:55.142 END TEST nvme_multi_aen 00:26:55.142 ************************************ 00:26:55.142 21:09:23 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:55.142 21:09:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:55.142 21:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.142 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:55.142 ************************************ 00:26:55.142 START TEST nvme_startup 00:26:55.142 ************************************ 00:26:55.142 21:09:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:55.400 Initializing NVMe Controllers 00:26:55.400 Attached to 0000:00:06.0 00:26:55.400 Initialization complete. 00:26:55.400 Time used:206654.219 (us). 00:26:55.400 00:26:55.400 real 0m0.305s 00:26:55.400 user 0m0.121s 00:26:55.400 sys 0m0.113s 00:26:55.400 21:09:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.400 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:55.400 ************************************ 00:26:55.400 END TEST nvme_startup 00:26:55.400 ************************************ 00:26:55.400 21:09:23 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:26:55.400 21:09:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:55.400 21:09:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.400 21:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:55.400 ************************************ 00:26:55.400 START TEST nvme_multi_secondary 00:26:55.400 ************************************ 00:26:55.400 21:09:23 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:26:55.400 21:09:23 -- nvme/nvme.sh@52 -- # pid0=133608 00:26:55.400 21:09:23 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:26:55.400 21:09:23 -- nvme/nvme.sh@54 -- # pid1=133610 00:26:55.400 21:09:23 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:26:55.400 21:09:23 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:26:59.586 Initializing NVMe Controllers 00:26:59.586 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:59.586 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:26:59.586 Initialization complete. Launching workers. 
00:26:59.586 ======================================================== 00:26:59.586 Latency(us) 00:26:59.586 Device Information : IOPS MiB/s Average min max 00:26:59.586 PCIE (0000:00:06.0) NSID 1 from core 2: 14761.97 57.66 1083.27 161.12 20675.30 00:26:59.586 ======================================================== 00:26:59.586 Total : 14761.97 57.66 1083.27 161.12 20675.30 00:26:59.586 00:26:59.586 21:09:26 -- nvme/nvme.sh@56 -- # wait 133608 00:26:59.586 Initializing NVMe Controllers 00:26:59.586 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:59.586 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:26:59.586 Initialization complete. Launching workers. 00:26:59.586 ======================================================== 00:26:59.586 Latency(us) 00:26:59.586 Device Information : IOPS MiB/s Average min max 00:26:59.586 PCIE (0000:00:06.0) NSID 1 from core 1: 36141.00 141.18 442.40 133.62 1323.67 00:26:59.586 ======================================================== 00:26:59.586 Total : 36141.00 141.18 442.40 133.62 1323.67 00:26:59.586 00:27:00.983 Initializing NVMe Controllers 00:27:00.983 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:00.983 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:00.983 Initialization complete. Launching workers. 00:27:00.983 ======================================================== 00:27:00.983 Latency(us) 00:27:00.983 Device Information : IOPS MiB/s Average min max 00:27:00.983 PCIE (0000:00:06.0) NSID 1 from core 0: 43701.83 170.71 365.80 99.77 1149.89 00:27:00.983 ======================================================== 00:27:00.983 Total : 43701.83 170.71 365.80 99.77 1149.89 00:27:00.983 00:27:00.983 21:09:29 -- nvme/nvme.sh@57 -- # wait 133610 00:27:00.983 21:09:29 -- nvme/nvme.sh@61 -- # pid0=133683 00:27:00.983 21:09:29 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:27:00.983 21:09:29 -- nvme/nvme.sh@63 -- # pid1=133684 00:27:00.983 21:09:29 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:27:00.983 21:09:29 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:27:04.267 Initializing NVMe Controllers 00:27:04.267 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:04.267 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:27:04.267 Initialization complete. Launching workers. 00:27:04.267 ======================================================== 00:27:04.267 Latency(us) 00:27:04.267 Device Information : IOPS MiB/s Average min max 00:27:04.267 PCIE (0000:00:06.0) NSID 1 from core 1: 36018.61 140.70 443.86 101.87 1298.40 00:27:04.267 ======================================================== 00:27:04.267 Total : 36018.61 140.70 443.86 101.87 1298.40 00:27:04.267 00:27:04.526 Initializing NVMe Controllers 00:27:04.526 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:04.526 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:27:04.526 Initialization complete. Launching workers. 
00:27:04.526 ======================================================== 00:27:04.526 Latency(us) 00:27:04.526 Device Information : IOPS MiB/s Average min max 00:27:04.526 PCIE (0000:00:06.0) NSID 1 from core 0: 37299.67 145.70 428.65 100.30 1357.87 00:27:04.526 ======================================================== 00:27:04.526 Total : 37299.67 145.70 428.65 100.30 1357.87 00:27:04.526 00:27:06.427 Initializing NVMe Controllers 00:27:06.427 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:27:06.427 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:27:06.427 Initialization complete. Launching workers. 00:27:06.427 ======================================================== 00:27:06.427 Latency(us) 00:27:06.427 Device Information : IOPS MiB/s Average min max 00:27:06.427 PCIE (0000:00:06.0) NSID 1 from core 2: 18166.12 70.96 880.12 117.58 21011.16 00:27:06.427 ======================================================== 00:27:06.427 Total : 18166.12 70.96 880.12 117.58 21011.16 00:27:06.427 00:27:06.427 21:09:34 -- nvme/nvme.sh@65 -- # wait 133683 00:27:06.427 21:09:34 -- nvme/nvme.sh@66 -- # wait 133684 00:27:06.427 00:27:06.427 real 0m11.079s 00:27:06.427 user 0m18.825s 00:27:06.427 sys 0m0.699s 00:27:06.427 21:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.427 21:09:34 -- common/autotest_common.sh@10 -- # set +x 00:27:06.427 ************************************ 00:27:06.427 END TEST nvme_multi_secondary 00:27:06.427 ************************************ 00:27:06.687 21:09:34 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:27:06.687 21:09:34 -- nvme/nvme.sh@102 -- # kill_stub 00:27:06.687 21:09:34 -- common/autotest_common.sh@1065 -- # [[ -e /proc/132901 ]] 00:27:06.687 21:09:34 -- common/autotest_common.sh@1066 -- # kill 132901 00:27:06.687 21:09:34 -- common/autotest_common.sh@1067 -- # wait 132901 00:27:07.254 [2024-06-09 21:09:35.180298] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133542) is not found. Dropping the request. 00:27:07.254 [2024-06-09 21:09:35.180552] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133542) is not found. Dropping the request. 00:27:07.254 [2024-06-09 21:09:35.180790] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133542) is not found. Dropping the request. 00:27:07.254 [2024-06-09 21:09:35.180976] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 133542) is not found. Dropping the request. 00:27:07.254 21:09:35 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:27:07.254 21:09:35 -- common/autotest_common.sh@1073 -- # echo 2 00:27:07.254 21:09:35 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:07.254 21:09:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:07.254 21:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.254 21:09:35 -- common/autotest_common.sh@10 -- # set +x 00:27:07.513 ************************************ 00:27:07.513 START TEST bdev_nvme_reset_stuck_adm_cmd 00:27:07.513 ************************************ 00:27:07.513 21:09:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:27:07.513 * Looking for test storage... 
00:27:07.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:27:07.513 21:09:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:07.513 21:09:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:07.513 21:09:35 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:07.513 21:09:35 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:07.513 21:09:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:07.513 21:09:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:07.513 21:09:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:07.513 21:09:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:07.513 21:09:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:07.513 21:09:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:07.513 21:09:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:07.513 21:09:35 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=133833 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 133833 00:27:07.513 21:09:35 -- common/autotest_common.sh@819 -- # '[' -z 133833 ']' 00:27:07.513 21:09:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:27:07.513 21:09:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.513 21:09:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:07.513 21:09:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.513 21:09:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:07.513 21:09:35 -- common/autotest_common.sh@10 -- # set +x 00:27:07.513 [2024-06-09 21:09:35.649562] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:07.513 [2024-06-09 21:09:35.649780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133833 ]
00:27:07.772 [2024-06-09 21:09:35.858948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:08.031 [2024-06-09 21:09:36.069876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:08.031 [2024-06-09 21:09:36.070308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:08.031 [2024-06-09 21:09:36.070447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:08.031 [2024-06-09 21:09:36.070567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:08.031 [2024-06-09 21:09:36.070571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:09.408 21:09:37 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:09.408 21:09:37 -- common/autotest_common.sh@852 -- # return 0
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
00:27:09.408 21:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:09.408 21:09:37 -- common/autotest_common.sh@10 -- # set +x
00:27:09.408 nvme0n1
00:27:09.408 21:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_OA2so.txt
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:27:09.408 21:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:09.408 21:09:37 -- common/autotest_common.sh@10 -- # set +x
00:27:09.408 true
00:27:09.408 21:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1717967377
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=133874
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:27:09.408 21:09:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:27:11.312 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:11.312 21:09:39 -- common/autotest_common.sh@10 -- # set +x
00:27:11.312 [2024-06-09 21:09:39.336126] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller
00:27:11.312 [2024-06-09 21:09:39.336503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:11.312 [2024-06-09 21:09:39.336583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:27:11.312 [2024-06-09 21:09:39.336610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:11.312 [2024-06-09 21:09:39.338695] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:11.312 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 133874
00:27:11.312 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 133874
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 133874
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:11.312 21:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:11.312 21:09:39 -- common/autotest_common.sh@10 -- # set +x
00:27:11.312 21:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_OA2so.txt
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_OA2so.txt
00:27:11.312 21:09:39 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 133833
00:27:11.312 21:09:39 -- common/autotest_common.sh@926 -- # '[' -z 133833 ']'
00:27:11.312 21:09:39 -- common/autotest_common.sh@930 -- # kill -0 133833
00:27:11.312 21:09:39 -- common/autotest_common.sh@931 -- # uname
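Stripped of the xtrace noise, the stuck-admin-command scenario above is five RPC calls. A sketch using the flags exactly as they appear in the trace (the output redirection into the tmp file is inferred from the later jq -r .cpl read, not shown in the trace itself; the base64 blob is a Get Features command, opcode 0x0a with cdw10=7, i.e. Number of Queues, which matches the "GET FEATURES NUMBER OF QUEUES" notice):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    # Arm the injection: the next admin Get Features (opc 10) is held rather
    # than submitted (--do_not_submit) for up to 15 s, then fails sct=0/sc=1.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Send the command that gets stuck, in the background; its completion
    # JSON lands in the tmp file read later with jq -r .cpl.
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > /tmp/err_inj_OA2so.txt &
    # The reset must manually complete the pending admin command (the
    # "Command completed manually" / INVALID OPCODE (00/01) notices above).
    $rpc bdev_nvme_reset_controller nvme0
    $rpc bdev_nvme_detach_controller nvme0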
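The base64_decode_bits calls at the end of that trace pick SC and SCT out of the returned completion. A sketch consistent with the trace; the byte offsets are a reconstruction (the status halfword is the top 16 bits of CQE dword 3, i.e. bytes 14-15 of the 16-byte cpl, little endian):

    base64_decode_bits() {
        local bin_array status
        # One hex byte per array element, e.g. 0x00 0x00 ... 0x02 0x00.
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # Status halfword layout: P = bit 0, SC = bits 1-8, SCT = bits 9-11.
        status=$((bin_array[14] | bin_array[15] << 8))
        printf '0x%x\n' $(((status >> $2) & $3))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0

Both values match the injected --sc 1 --sct 0, which is what the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check further down verifies.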
00:27:11.312 21:09:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:11.312 21:09:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133833
00:27:11.312 21:09:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:11.312 21:09:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:11.312 21:09:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133833'
00:27:11.312 killing process with pid 133833
00:27:11.312 21:09:39 -- common/autotest_common.sh@945 -- # kill 133833
00:27:11.312 21:09:39 -- common/autotest_common.sh@950 -- # wait 133833
00:27:13.216 21:09:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:27:13.216 21:09:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:27:13.216
00:27:13.216 real 0m5.901s
00:27:13.216 user 0m20.961s
00:27:13.216 sys 0m0.679s
00:27:13.216 21:09:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:13.216 ************************************
00:27:13.216 END TEST bdev_nvme_reset_stuck_adm_cmd
00:27:13.216 ************************************
00:27:13.216 21:09:41 -- common/autotest_common.sh@10 -- # set +x
00:27:13.216 21:09:41 -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:27:13.216 21:09:41 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:27:13.216 21:09:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:27:13.216 21:09:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:13.216 21:09:41 -- common/autotest_common.sh@10 -- # set +x
00:27:13.475 ************************************
00:27:13.475 START TEST nvme_fio
00:27:13.475 ************************************
00:27:13.475 21:09:41 -- common/autotest_common.sh@1104 -- # nvme_fio_test
00:27:13.475 21:09:41 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:27:13.475 21:09:41 -- nvme/nvme.sh@32 -- # ran_fio=false
00:27:13.475 21:09:41 -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:27:13.475 21:09:41 -- common/autotest_common.sh@1498 -- # bdfs=()
00:27:13.475 21:09:41 -- common/autotest_common.sh@1498 -- # local bdfs
00:27:13.475 21:09:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:27:13.475 21:09:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:27:13.475 21:09:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:27:13.475 21:09:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:27:13.475 21:09:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0
00:27:13.475 21:09:41 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0')
00:27:13.475 21:09:41 -- nvme/nvme.sh@33 -- # local bdfs bdf
00:27:13.475 21:09:41 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:27:13.475 21:09:41 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:27:13.475 21:09:41 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:27:13.734 21:09:41 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:27:13.734 21:09:41 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0'
00:27:13.994 21:09:41 -- nvme/nvme.sh@41 -- # bs=4096
00:27:13.994 21:09:41 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
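fio_nvme expands to the fio_plugin helper traced next: it finds whichever ASan runtime the SPDK fio plugin links against and LD_PRELOADs it ahead of the plugin, since an instrumented .so cannot otherwise be dlopen'ed into an uninstrumented fio. A sketch matching the expansion below (names as traced; error handling omitted):

    fio_plugin() {
        local fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan')
        local plugin=$1 asan_lib=
        shift
        for sanitizer in "${sanitizers[@]}"; do
            # First sanitizer runtime the plugin is linked against wins.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

In this run it resolves /lib/x86_64-linux-gnu/libasan.so.6 and preloads it together with build/fio/spdk_nvme, as the trace that follows shows.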
00:27:13.994 21:09:41 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:27:13.994 21:09:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:27:13.994 21:09:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:27:13.994 21:09:41 -- common/autotest_common.sh@1318 -- # local sanitizers
00:27:13.994 21:09:41 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:27:13.994 21:09:41 -- common/autotest_common.sh@1320 -- # shift
00:27:13.994 21:09:41 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:27:13.994 21:09:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:27:13.994 21:09:41 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:27:13.994 21:09:41 -- common/autotest_common.sh@1324 -- # grep libasan
00:27:13.994 21:09:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:27:13.994 21:09:41 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:27:13.994 21:09:41 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:27:13.994 21:09:41 -- common/autotest_common.sh@1326 -- # break
00:27:13.994 21:09:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:27:13.994 21:09:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
00:27:13.994 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:27:13.994 fio-3.35
00:27:13.994 Starting 1 thread
00:27:17.291
00:27:17.291 test: (groupid=0, jobs=1): err= 0: pid=134023: Sun Jun 9 21:09:44 2024
00:27:17.291 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec)
00:27:17.291 slat (usec): min=3, max=106, avg= 5.99, stdev= 3.55
00:27:17.291 clat (usec): min=217, max=8446, avg=4031.84, stdev=302.26
00:27:17.291 lat (usec): min=222, max=8553, avg=4037.83, stdev=302.61
00:27:17.291 clat percentiles (usec):
00:27:17.291  | 1.00th=[ 3490], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3818],
00:27:17.291  | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080],
00:27:17.291  | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4424],
00:27:17.291  | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5997], 99.95th=[ 7308],
00:27:17.291  | 99.99th=[ 8356]
00:27:17.291 bw ( KiB/s): min=59576, max=64096, per=99.06%, avg=62496.00, stdev=2532.67, samples=3
00:27:17.291 iops : min=14894, max=16024, avg=15624.00, stdev=633.17, samples=3
00:27:17.291 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec); 0 zone resets
00:27:17.291 slat (nsec): min=3768, max=52715, avg=6414.13, stdev=3795.97
00:27:17.291 clat (usec): min=294, max=8353, avg=4049.79, stdev=306.13
00:27:17.291 lat (usec): min=300, max=8373, avg=4056.21, stdev=306.40
00:27:17.291 clat percentiles (usec):
00:27:17.291  | 1.00th=[ 3490], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3851],
00:27:17.291  | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4080],
00:27:17.291  | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490],
00:27:17.291  | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 6390], 99.95th=[ 7373],
00:27:17.291  | 99.99th=[ 8225]
00:27:17.291 bw ( KiB/s): min=59872, max=63312, per=98.24%, avg=62029.33, stdev=1879.41, samples=3
00:27:17.291 iops : min=14968, max=15828, avg=15507.33, stdev=469.85, samples=3
00:27:17.291 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:27:17.291 lat (msec) : 2=0.05%, 4=45.35%, 10=54.55%
00:27:17.291 cpu : usr=99.95%, sys=0.00%, ctx=4, majf=0, minf=37
00:27:17.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:27:17.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:17.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:17.291 issued rwts: total=31560,31585,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:17.291 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:17.291
00:27:17.291 Run status group 0 (all jobs):
00:27:17.291 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec
00:27:17.291 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec
00:27:17.291 -----------------------------------------------------
00:27:17.291 Suppressions used:
00:27:17.291 count bytes template
00:27:17.291 1 32 /usr/src/fio/parse.c
00:27:17.291 -----------------------------------------------------
00:27:17.291
00:27:17.291 21:09:45 -- nvme/nvme.sh@44 -- # ran_fio=true
00:27:17.291 21:09:45 -- nvme/nvme.sh@46 -- # true
00:27:17.291
00:27:17.291 real 0m3.862s
00:27:17.291 user 0m3.212s
00:27:17.291 sys 0m0.321s
00:27:17.291 21:09:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:17.291 21:09:45 -- common/autotest_common.sh@10 -- # set +x
00:27:17.291 ************************************
00:27:17.291 END TEST nvme_fio
00:27:17.291 ************************************
00:27:17.291
00:27:17.291 real 0m47.301s
00:27:17.291 user 2m6.424s
00:27:17.291 sys 0m8.242s
00:27:17.291 21:09:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:17.291 21:09:45 -- common/autotest_common.sh@10 -- # set +x
00:27:17.291 ************************************
00:27:17.291 END TEST nvme
00:27:17.291 ************************************
00:27:17.291 21:09:45 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:27:17.291 21:09:45 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:27:17.291 21:09:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:27:17.291 21:09:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:17.291 21:09:45 -- common/autotest_common.sh@10 -- # set +x
00:27:17.291 ************************************
00:27:17.291 START TEST nvme_scc
00:27:17.291 ************************************
00:27:17.291 21:09:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
* Looking for test storage...
00:27:17.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:27:17.291 21:09:45 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:27:17.291 21:09:45 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:27:17.291 21:09:45 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:27:17.291 21:09:45 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:27:17.291 21:09:45 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:17.291 21:09:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:17.291 21:09:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:17.291 21:09:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:17.291 21:09:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:17.291 21:09:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:17.291 21:09:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:17.291 21:09:45 -- paths/export.sh@5 -- # export PATH
00:27:17.291 21:09:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:27:17.291 21:09:45 -- nvme/functions.sh@10 -- # ctrls=()
00:27:17.291 21:09:45 -- nvme/functions.sh@10 -- # declare -A ctrls
00:27:17.291 21:09:45 -- nvme/functions.sh@11 -- # nvmes=()
00:27:17.291 21:09:45 -- nvme/functions.sh@11 -- # declare -A nvmes
00:27:17.291 21:09:45 -- nvme/functions.sh@12 -- # bdfs=()
00:27:17.291 21:09:45 -- nvme/functions.sh@12 -- # declare -A bdfs
00:27:17.291 21:09:45 -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:27:17.291 21:09:45 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:27:17.291 21:09:45 -- nvme/functions.sh@14 -- # nvme_name=
00:27:17.291 21:09:45 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:17.291 21:09:45 -- nvme/nvme_scc.sh@12 -- # uname
00:27:17.291 21:09:45 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:27:17.291 21:09:45 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
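The scan that follows is functions.sh walking nvme-cli output field by field: each "key : value" line of nvme id-ctrl / id-ns becomes one entry in the ctrls/nvmes associative arrays declared above. A sketch of that loop, reconstructed from the trace (the real helper also normalizes keys and handles quoting more carefully):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg// /}]=\"${val# }\""   # nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }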
00:27:17.291 21:09:45 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:27:17.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:27:17.550 Waiting for block devices as requested
00:27:17.811 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:27:17.811 21:09:45 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:27:17.811 21:09:45 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:27:17.811 21:09:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:27:17.811 21:09:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:27:17.811 21:09:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0
00:27:17.811 21:09:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0
00:27:17.811 21:09:45 -- scripts/common.sh@15 -- # local i
00:27:17.811 21:09:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:27:17.811 21:09:45 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:27:17.811 21:09:45 -- scripts/common.sh@24 -- # return 0
00:27:17.811 21:09:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:27:17.811 21:09:45 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:27:17.811 21:09:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
[nvme_get xtrace condensed: one IFS=: / read / eval round per id-ctrl field; the values recorded into nvme0 are]
vid=0x1b36 ssvid=0x1af4 sn="12340 " mn="QEMU NVMe Ctrl " fr="8.0.0 " rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
ps0="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0" rwt="0 rwl:0 idle_power:- active_power:-" active_power_workload=-
00:27:17.814 21:09:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:27:17.814 21:09:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:27:17.814 21:09:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:27:17.814 21:09:45 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:27:17.814 21:09:45 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:27:17.814 21:09:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
[same parse for the namespace; nvme0n1 ends up with]
nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
lbaf0="ms:0 lbads:9 rp:0 " lbaf1="ms:8 lbads:9 rp:0 " lbaf2="ms:16 lbads:9 rp:0 " lbaf3="ms:64 lbads:9 rp:0 "
00:27:17.815 21:09:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # eval
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # IFS=: 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # read -r reg val 00:27:17.815 21:09:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # IFS=: 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # read -r reg val 00:27:17.815 21:09:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # IFS=: 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # read -r reg val 00:27:17.815 21:09:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:17.815 21:09:45 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # IFS=: 00:27:17.815 21:09:45 -- nvme/functions.sh@21 -- # read -r reg val 00:27:17.815 21:09:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:27:17.815 21:09:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:27:17.815 21:09:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:27:17.815 21:09:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:27:17.815 21:09:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:27:17.815 21:09:45 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:27:17.815 21:09:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:27:17.815 21:09:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:27:17.815 21:09:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:27:17.815 21:09:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:27:17.815 21:09:45 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:27:17.815 21:09:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:27:17.815 21:09:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:27:17.815 21:09:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:27:17.815 21:09:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:27:17.815 21:09:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:27:17.815 21:09:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:27:17.816 21:09:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:27:17.816 21:09:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:27:17.816 21:09:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:27:17.816 21:09:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:27:17.816 21:09:45 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:27:17.816 21:09:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:27:17.816 21:09:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:17.816 21:09:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:27:17.816 21:09:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:27:17.816 21:09:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:27:17.816 21:09:45 -- nvme/functions.sh@197 -- # echo nvme0 00:27:17.816 21:09:45 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:27:17.816 21:09:45 -- nvme/functions.sh@206 -- # echo nvme0 00:27:17.816 21:09:45 -- nvme/functions.sh@207 -- # return 0 00:27:17.816 21:09:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:27:17.816 21:09:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:27:17.816 21:09:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:18.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:18.383 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:19.319 21:09:47 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:27:19.319 21:09:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:19.319 21:09:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.319 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:19.319 ************************************ 00:27:19.319 START TEST nvme_simple_copy 00:27:19.319 ************************************ 00:27:19.319 21:09:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:27:19.577 Initializing NVMe Controllers 00:27:19.577 Attaching to 0000:00:06.0 00:27:19.577 Controller supports SCC. Attached to 0000:00:06.0 00:27:19.577 Namespace ID: 1 size: 5GB 00:27:19.577 Initialization complete. 00:27:19.577 00:27:19.577 Controller QEMU NVMe Ctrl (12340 ) 00:27:19.577 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:27:19.577 Namespace Block Size:4096 00:27:19.577 Writing LBAs 0 to 63 with Random Data 00:27:19.577 Copied LBAs from 0 - 63 to the Destination LBA 256 00:27:19.577 LBAs matching Written Data: 64 00:27:19.577 00:27:19.577 real 0m0.312s 00:27:19.577 user 0m0.111s 00:27:19.577 sys 0m0.103s 00:27:19.577 21:09:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.577 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:19.577 ************************************ 00:27:19.577 END TEST nvme_simple_copy 00:27:19.577 ************************************ 00:27:19.836 00:27:19.836 real 0m2.420s 00:27:19.836 user 0m0.707s 00:27:19.836 sys 0m1.616s 00:27:19.836 21:09:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.836 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:19.836 ************************************ 00:27:19.836 END TEST nvme_scc 00:27:19.836 ************************************ 00:27:19.836 21:09:47 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:27:19.836 21:09:47 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:27:19.836 21:09:47 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:27:19.836 21:09:47 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:27:19.836 21:09:47 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:27:19.836 21:09:47 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:19.836 21:09:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:19.836 21:09:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.836 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:19.836 ************************************ 00:27:19.836 START TEST nvme_rpc 00:27:19.836 ************************************ 00:27:19.836 21:09:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:19.836 * Looking for test storage... 
00:27:19.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:27:19.836 21:09:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:19.836 21:09:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:19.836 21:09:47 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:19.836 21:09:47 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:19.836 21:09:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:19.836 21:09:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:19.836 21:09:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:19.836 21:09:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:19.836 21:09:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:19.836 21:09:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:19.836 21:09:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:27:19.836 21:09:47 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=134488 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:27:19.836 21:09:47 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 134488 00:27:19.836 21:09:47 -- common/autotest_common.sh@819 -- # '[' -z 134488 ']' 00:27:19.836 21:09:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.836 21:09:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.836 21:09:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.836 21:09:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:19.836 21:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:19.836 [2024-06-09 21:09:48.005326] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
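For orientation, the bdf resolved above comes from get_first_nvme_bdf, which pipes the JSON bdev config emitted by gen_nvme.sh through jq and takes the first traddr. A minimal standalone sketch of that same pattern, assuming the repo layout used in this run:

#!/usr/bin/env bash
# Sketch of the PCI-address discovery seen in the trace above.
# gen_nvme.sh prints a JSON bdev config; each .config[].params.traddr
# field is the PCI address (bdf) of one NVMe controller.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[0]}"   # resolves to 0000:00:06.0 on this VM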
00:27:19.836 [2024-06-09 21:09:48.005561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134488 ] 00:27:20.094 [2024-06-09 21:09:48.177686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:20.352 [2024-06-09 21:09:48.344989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:20.352 [2024-06-09 21:09:48.345839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.352 [2024-06-09 21:09:48.345847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.730 21:09:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:21.730 21:09:49 -- common/autotest_common.sh@852 -- # return 0 00:27:21.730 21:09:49 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:27:21.988 Nvme0n1 00:27:21.988 21:09:49 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:27:21.988 21:09:49 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:27:22.247 request: 00:27:22.247 { 00:27:22.247 "filename": "non_existing_file", 00:27:22.247 "bdev_name": "Nvme0n1", 00:27:22.247 "method": "bdev_nvme_apply_firmware", 00:27:22.247 "req_id": 1 00:27:22.247 } 00:27:22.247 Got JSON-RPC error response 00:27:22.247 response: 00:27:22.247 { 00:27:22.247 "code": -32603, 00:27:22.247 "message": "open file failed." 00:27:22.247 } 00:27:22.247 21:09:50 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:27:22.247 21:09:50 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:27:22.247 21:09:50 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:22.506 21:09:50 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:22.506 21:09:50 -- nvme/nvme_rpc.sh@40 -- # killprocess 134488 00:27:22.506 21:09:50 -- common/autotest_common.sh@926 -- # '[' -z 134488 ']' 00:27:22.506 21:09:50 -- common/autotest_common.sh@930 -- # kill -0 134488 00:27:22.506 21:09:50 -- common/autotest_common.sh@931 -- # uname 00:27:22.506 21:09:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:22.506 21:09:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134488 00:27:22.506 21:09:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:22.506 21:09:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:22.506 21:09:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134488' 00:27:22.506 killing process with pid 134488 00:27:22.506 21:09:50 -- common/autotest_common.sh@945 -- # kill 134488 00:27:22.506 21:09:50 -- common/autotest_common.sh@950 -- # wait 134488 00:27:24.408 00:27:24.408 real 0m4.427s 00:27:24.408 user 0m8.755s 00:27:24.408 sys 0m0.590s 00:27:24.408 ************************************ 00:27:24.408 END TEST nvme_rpc 00:27:24.408 ************************************ 00:27:24.408 21:09:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.408 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:24.408 21:09:52 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:24.408 21:09:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:24.408 21:09:52 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:27:24.408 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:24.408 ************************************ 00:27:24.408 START TEST nvme_rpc_timeouts 00:27:24.408 ************************************ 00:27:24.408 21:09:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:24.408 * Looking for test storage... 00:27:24.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_134580 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_134580 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=134610 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:27:24.408 21:09:52 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 134610 00:27:24.408 21:09:52 -- common/autotest_common.sh@819 -- # '[' -z 134610 ']' 00:27:24.408 21:09:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.408 21:09:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:24.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.408 21:09:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.408 21:09:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:24.408 21:09:52 -- common/autotest_common.sh@10 -- # set +x 00:27:24.408 [2024-06-09 21:09:52.423653] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:24.408 [2024-06-09 21:09:52.423837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134610 ] 00:27:24.667 [2024-06-09 21:09:52.585152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:24.667 [2024-06-09 21:09:52.766369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.667 [2024-06-09 21:09:52.767302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.667 [2024-06-09 21:09:52.767316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.042 21:09:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.042 21:09:54 -- common/autotest_common.sh@852 -- # return 0 00:27:26.042 21:09:54 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:27:26.042 Checking default timeout settings: 00:27:26.042 21:09:54 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:26.301 21:09:54 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:27:26.301 Making settings changes with rpc: 00:27:26.301 21:09:54 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:27:26.559 21:09:54 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:27:26.559 Check default vs. modified settings: 00:27:26.559 21:09:54 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:27:27.127 Setting action_on_timeout is changed as expected. 
00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:27:27.127 Setting timeout_us is changed as expected. 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:27:27.127 Setting timeout_admin_us is changed as expected. 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_134580 /tmp/settings_modified_134580 00:27:27.127 21:09:55 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 134610 00:27:27.127 21:09:55 -- common/autotest_common.sh@926 -- # '[' -z 134610 ']' 00:27:27.127 21:09:55 -- common/autotest_common.sh@930 -- # kill -0 134610 00:27:27.127 21:09:55 -- common/autotest_common.sh@931 -- # uname 00:27:27.127 21:09:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:27.127 21:09:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134610 00:27:27.127 21:09:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:27.127 21:09:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:27.127 21:09:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134610' 00:27:27.127 killing process with pid 134610 00:27:27.127 21:09:55 -- common/autotest_common.sh@945 -- # kill 134610 00:27:27.127 21:09:55 -- common/autotest_common.sh@950 -- # wait 134610 00:27:29.061 RPC TIMEOUT SETTING TEST PASSED. 00:27:29.061 21:09:56 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
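The PASSED verdict above is produced by a save/modify/save/compare cycle: save_config snapshots the defaults, bdev_nvme_set_options changes the three timeout settings over JSON-RPC, save_config snapshots again, and each setting is extracted from both snapshots with the grep | awk | sed chain visible in the trace. A condensed sketch of that cycle, assuming a freshly started spdk_tgt on the default socket as launched earlier in this test (the quoted grep pattern is tightened slightly so timeout_us does not also match timeout_admin_us):

#!/usr/bin/env bash
# Condensed form of the nvme_rpc_timeouts check traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" save_config > /tmp/settings_default
"$rpc" bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified

get_setting() {  # print the value of setting $1 in snapshot file $2
    grep "\"$1\"" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}
for s in action_on_timeout timeout_us timeout_admin_us; do
    before=$(get_setting "$s" /tmp/settings_default)
    after=$(get_setting "$s" /tmp/settings_modified)
    if [ "$before" != "$after" ]; then
        echo "Setting $s is changed as expected."
    fi
done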
00:27:29.061 00:27:29.061 real 0m4.650s 00:27:29.061 user 0m9.247s 00:27:29.061 sys 0m0.641s 00:27:29.061 21:09:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.061 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:29.061 ************************************ 00:27:29.061 END TEST nvme_rpc_timeouts 00:27:29.061 ************************************ 00:27:29.061 21:09:56 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:27:29.061 21:09:56 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:27:29.061 21:09:56 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:56 -- spdk/autotest.sh@268 -- # timing_exit lib 00:27:29.061 21:09:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:29.061 21:09:56 -- common/autotest_common.sh@10 -- # set +x 00:27:29.061 21:09:57 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:29.061 21:09:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:29.061 21:09:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:29.061 21:09:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:29.061 21:09:57 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:29.061 21:09:57 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:29.061 21:09:57 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:29.061 21:09:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.061 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:29.061 21:09:57 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:29.061 21:09:57 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:29.061 21:09:57 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:29.061 21:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:30.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:30.438 Waiting for block devices as requested 00:27:30.438 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:31.004 Cleaning 00:27:31.004 Removing: /var/run/dpdk/spdk0/config 00:27:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:31.004 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:31.004 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:31.004 Removing: /dev/shm/spdk_tgt_trace.pid102398 00:27:31.004 Removing: /var/run/dpdk/spdk0 00:27:31.004 Removing: /var/run/dpdk/spdk_pid102156 00:27:31.004 Removing: 
/var/run/dpdk/spdk_pid102398 00:27:31.004 Removing: /var/run/dpdk/spdk_pid102703 00:27:31.004 Removing: /var/run/dpdk/spdk_pid102953 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103137 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103252 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103359 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103485 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103601 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103649 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103692 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103768 00:27:31.004 Removing: /var/run/dpdk/spdk_pid103898 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104426 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104513 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104599 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104636 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104788 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104823 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104970 00:27:31.004 Removing: /var/run/dpdk/spdk_pid104993 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105062 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105099 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105156 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105195 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105379 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105431 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105479 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105557 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105656 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105694 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105783 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105825 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105877 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105912 00:27:31.004 Removing: /var/run/dpdk/spdk_pid105957 00:27:31.004 Removing: /var/run/dpdk/spdk_pid106007 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106052 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106087 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106139 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106176 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106221 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106256 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106313 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106348 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106395 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106432 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106482 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106517 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106569 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106609 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106654 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106689 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106743 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106778 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106828 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106870 00:27:31.005 Removing: /var/run/dpdk/spdk_pid106915 00:27:31.263 Removing: /var/run/dpdk/spdk_pid106953 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107004 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107039 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107089 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107126 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107178 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107216 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107277 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107316 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107371 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107406 00:27:31.263 Removing: 
/var/run/dpdk/spdk_pid107457 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107500 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107547 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107639 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107766 00:27:31.263 Removing: /var/run/dpdk/spdk_pid107945 00:27:31.263 Removing: /var/run/dpdk/spdk_pid108035 00:27:31.263 Removing: /var/run/dpdk/spdk_pid108094 00:27:31.263 Removing: /var/run/dpdk/spdk_pid109336 00:27:31.263 Removing: /var/run/dpdk/spdk_pid109557 00:27:31.263 Removing: /var/run/dpdk/spdk_pid109764 00:27:31.263 Removing: /var/run/dpdk/spdk_pid109889 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110033 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110105 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110140 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110171 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110649 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110751 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110872 00:27:31.263 Removing: /var/run/dpdk/spdk_pid110930 00:27:31.263 Removing: /var/run/dpdk/spdk_pid112133 00:27:31.263 Removing: /var/run/dpdk/spdk_pid113027 00:27:31.263 Removing: /var/run/dpdk/spdk_pid113931 00:27:31.263 Removing: /var/run/dpdk/spdk_pid115039 00:27:31.263 Removing: /var/run/dpdk/spdk_pid116113 00:27:31.263 Removing: /var/run/dpdk/spdk_pid117192 00:27:31.263 Removing: /var/run/dpdk/spdk_pid118693 00:27:31.263 Removing: /var/run/dpdk/spdk_pid119919 00:27:31.263 Removing: /var/run/dpdk/spdk_pid121122 00:27:31.263 Removing: /var/run/dpdk/spdk_pid121795 00:27:31.263 Removing: /var/run/dpdk/spdk_pid122331 00:27:31.263 Removing: /var/run/dpdk/spdk_pid122952 00:27:31.263 Removing: /var/run/dpdk/spdk_pid123442 00:27:31.263 Removing: /var/run/dpdk/spdk_pid124010 00:27:31.263 Removing: /var/run/dpdk/spdk_pid124547 00:27:31.263 Removing: /var/run/dpdk/spdk_pid125203 00:27:31.263 Removing: /var/run/dpdk/spdk_pid125722 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126391 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126444 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126500 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126560 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126700 00:27:31.263 Removing: /var/run/dpdk/spdk_pid126853 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127067 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127348 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127372 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127425 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127451 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127480 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127514 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127541 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127565 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127601 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127626 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127654 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127689 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127716 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127742 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127769 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127801 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127829 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127863 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127883 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127916 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127963 00:27:31.263 Removing: /var/run/dpdk/spdk_pid127991 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128036 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128120 00:27:31.263 Removing: 
/var/run/dpdk/spdk_pid128167 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128188 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128238 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128261 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128283 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128351 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128371 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128414 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128447 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128464 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128490 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128512 00:27:31.263 Removing: /var/run/dpdk/spdk_pid128536 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128564 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128582 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128627 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128682 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128703 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128754 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128781 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128796 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128860 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128893 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128936 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128962 00:27:31.521 Removing: /var/run/dpdk/spdk_pid128986 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129015 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129034 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129058 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129080 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129104 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129200 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129290 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129438 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129473 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129520 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129590 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129628 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129659 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129686 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129730 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129764 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129845 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129912 00:27:31.521 Removing: /var/run/dpdk/spdk_pid129970 00:27:31.521 Removing: /var/run/dpdk/spdk_pid130235 00:27:31.521 Removing: /var/run/dpdk/spdk_pid130360 00:27:31.521 Removing: /var/run/dpdk/spdk_pid130408 00:27:31.521 Removing: /var/run/dpdk/spdk_pid130504 00:27:31.522 Removing: /var/run/dpdk/spdk_pid130588 00:27:31.522 Removing: /var/run/dpdk/spdk_pid130639 00:27:31.522 Removing: /var/run/dpdk/spdk_pid130896 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131072 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131174 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131232 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131260 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131343 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131778 00:27:31.522 Removing: /var/run/dpdk/spdk_pid131829 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132137 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132262 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132370 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132427 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132456 00:27:31.522 Removing: /var/run/dpdk/spdk_pid132495 00:27:31.522 Removing: /var/run/dpdk/spdk_pid133833 00:27:31.522 Removing: /var/run/dpdk/spdk_pid133983 00:27:31.522 Removing: /var/run/dpdk/spdk_pid133987 00:27:31.522 Removing: 
/var/run/dpdk/spdk_pid134013 00:27:31.522 Removing: /var/run/dpdk/spdk_pid134488 00:27:31.522 Removing: /var/run/dpdk/spdk_pid134610 00:27:31.522 Clean 00:27:31.780 killing process with pid 92410 00:27:31.780 killing process with pid 92411 00:27:31.780 21:09:59 -- common/autotest_common.sh@1436 -- # return 0 00:27:31.780 21:09:59 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:31.780 21:09:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:31.780 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:27:31.780 21:09:59 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:31.780 21:09:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:31.780 21:09:59 -- common/autotest_common.sh@10 -- # set +x 00:27:31.780 21:09:59 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:31.780 21:09:59 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:31.780 21:09:59 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:31.780 21:09:59 -- spdk/autotest.sh@394 -- # hash lcov 00:27:31.780 21:09:59 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:31.780 21:09:59 -- spdk/autotest.sh@396 -- # hostname 00:27:31.780 21:09:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:32.038 geninfo: WARNING: invalid characters removed from testname! 00:28:10.759 21:10:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:14.949 21:10:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:17.480 21:10:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:20.011 21:10:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.293 21:10:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:25.823 21:10:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:28.354 21:10:56 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:28.354 21:10:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:28.354 21:10:56 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:28.354 21:10:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.354 21:10:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.354 21:10:56 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:28.354 21:10:56 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:28.354 21:10:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:28.354 21:10:56 -- paths/export.sh@5 -- $ export PATH 00:28:28.354 21:10:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:28.354 21:10:56 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:28.354 21:10:56 -- common/autobuild_common.sh@435 -- $ date +%s 00:28:28.354 21:10:56 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717967456.XXXXXX 00:28:28.354 21:10:56 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717967456.qREvyC 00:28:28.354 21:10:56 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:28:28.354 21:10:56 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:28:28.354 21:10:56 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:28.354 21:10:56 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:28.354 21:10:56 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:28.354 21:10:56 -- 
common/autobuild_common.sh@451 -- $ get_config_params 00:28:28.355 21:10:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:28:28.355 21:10:56 -- common/autotest_common.sh@10 -- $ set +x 00:28:28.355 21:10:56 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage' 00:28:28.355 21:10:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:28.355 21:10:56 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:28.355 21:10:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:28.355 21:10:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:28.355 21:10:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:28.355 21:10:56 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:28:28.355 21:10:56 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:28:28.355 21:10:56 -- common/autotest_common.sh@10 -- $ set +x 00:28:28.355 21:10:56 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:28:28.355 21:10:56 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:28:28.355 21:10:56 -- spdk/autopackage.sh@40 -- $ get_config_params 00:28:28.355 21:10:56 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:28:28.355 21:10:56 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:28:28.355 21:10:56 -- common/autotest_common.sh@10 -- $ set +x 00:28:28.355 21:10:56 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage' 00:28:28.355 21:10:56 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --enable-lto 00:28:28.613 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:28.613 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:28:28.871 Using 'verbs' RDMA provider 00:28:41.338 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:28:51.310 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:28:51.310 Creating mk/config.mk...done. 00:28:51.310 Creating mk/cc.flags.mk...done. 00:28:51.310 Type 'make' to build. 00:28:51.310 21:11:18 -- spdk/autopackage.sh@43 -- $ make -j10 00:28:51.310 make[1]: Nothing to be done for 'all'. 
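With the test pass complete, autopackage rebuilds SPDK in release form: it re-derives the configure flags with get_config_params, strips --enable-debug with sed, and reconfigures with --enable-lto before running make -j10, which is the configure/make exchange shown above. The same three steps in isolation, with the flags copied from this run:

#!/usr/bin/env bash
# Release rebuild pattern from the trace above: reuse the test run's
# configure flags, minus --enable-debug, plus link-time optimization.
cd /home/vagrant/spdk_repo/spdk
params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage'
params=$(sed 's/--enable-debug//g' <<< "$params")
./configure $params --enable-lto
make -j10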
00:28:55.495 The Meson build system 00:28:55.495 Version: 1.4.0 00:28:55.495 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:28:55.496 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:28:55.496 Build type: native build 00:28:55.496 Program cat found: YES (/usr/bin/cat) 00:28:55.496 Project name: DPDK 00:28:55.496 Project version: 23.11.0 00:28:55.496 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:28:55.496 C linker for the host machine: cc ld.bfd 2.38 00:28:55.496 Host machine cpu family: x86_64 00:28:55.496 Host machine cpu: x86_64 00:28:55.496 Message: ## Building in Developer Mode ## 00:28:55.496 Program pkg-config found: YES (/usr/bin/pkg-config) 00:28:55.496 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:28:55.496 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:28:55.496 Program python3 found: YES (/usr/bin/python3) 00:28:55.496 Program cat found: YES (/usr/bin/cat) 00:28:55.496 Compiler for C supports arguments -march=native: YES 00:28:55.496 Checking for size of "void *" : 8 00:28:55.496 Checking for size of "void *" : 8 (cached) 00:28:55.496 Library m found: YES 00:28:55.496 Library numa found: YES 00:28:55.496 Has header "numaif.h" : YES 00:28:55.496 Library fdt found: NO 00:28:55.496 Library execinfo found: NO 00:28:55.496 Has header "execinfo.h" : YES 00:28:55.496 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:28:55.496 Run-time dependency libarchive found: NO (tried pkgconfig) 00:28:55.496 Run-time dependency libbsd found: NO (tried pkgconfig) 00:28:55.496 Run-time dependency jansson found: NO (tried pkgconfig) 00:28:55.496 Run-time dependency openssl found: YES 3.0.2 00:28:55.496 Run-time dependency libpcap found: NO (tried pkgconfig) 00:28:55.496 Library pcap found: NO 00:28:55.496 Compiler for C supports arguments -Wcast-qual: YES 00:28:55.496 Compiler for C supports arguments -Wdeprecated: YES 00:28:55.496 Compiler for C supports arguments -Wformat: YES 00:28:55.496 Compiler for C supports arguments -Wformat-nonliteral: YES 00:28:55.496 Compiler for C supports arguments -Wformat-security: YES 00:28:55.496 Compiler for C supports arguments -Wmissing-declarations: YES 00:28:55.496 Compiler for C supports arguments -Wmissing-prototypes: YES 00:28:55.496 Compiler for C supports arguments -Wnested-externs: YES 00:28:55.496 Compiler for C supports arguments -Wold-style-definition: YES 00:28:55.496 Compiler for C supports arguments -Wpointer-arith: YES 00:28:55.496 Compiler for C supports arguments -Wsign-compare: YES 00:28:55.496 Compiler for C supports arguments -Wstrict-prototypes: YES 00:28:55.496 Compiler for C supports arguments -Wundef: YES 00:28:55.496 Compiler for C supports arguments -Wwrite-strings: YES 00:28:55.496 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:28:55.496 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:28:55.496 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:28:55.496 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:28:55.496 Program objdump found: YES (/usr/bin/objdump) 00:28:55.496 Compiler for C supports arguments -mavx512f: YES 00:28:55.496 Checking if "AVX512 checking" compiles: YES 00:28:55.496 Fetching value of define "__SSE4_2__" : 1 00:28:55.496 Fetching value of define "__AES__" : 1 00:28:55.496 Fetching value of define "__AVX__" : 1 00:28:55.496 Fetching value of 
define "__AVX2__" : 1 00:28:55.496 Fetching value of define "__AVX512BW__" : (undefined) 00:28:55.496 Fetching value of define "__AVX512CD__" : (undefined) 00:28:55.496 Fetching value of define "__AVX512DQ__" : (undefined) 00:28:55.496 Fetching value of define "__AVX512F__" : (undefined) 00:28:55.496 Fetching value of define "__AVX512VL__" : (undefined) 00:28:55.496 Fetching value of define "__PCLMUL__" : 1 00:28:55.496 Fetching value of define "__RDRND__" : 1 00:28:55.496 Fetching value of define "__RDSEED__" : 1 00:28:55.496 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:28:55.496 Fetching value of define "__znver1__" : (undefined) 00:28:55.496 Fetching value of define "__znver2__" : (undefined) 00:28:55.496 Fetching value of define "__znver3__" : (undefined) 00:28:55.496 Fetching value of define "__znver4__" : (undefined) 00:28:55.496 Compiler for C supports arguments -ffat-lto-objects: YES 00:28:55.496 Library asan found: YES 00:28:55.496 Compiler for C supports arguments -Wno-format-truncation: YES 00:28:55.496 Message: lib/log: Defining dependency "log" 00:28:55.496 Message: lib/kvargs: Defining dependency "kvargs" 00:28:55.496 Message: lib/telemetry: Defining dependency "telemetry" 00:28:55.496 Library rt found: YES 00:28:55.496 Checking for function "getentropy" : NO 00:28:55.496 Message: lib/eal: Defining dependency "eal" 00:28:55.496 Message: lib/ring: Defining dependency "ring" 00:28:55.496 Message: lib/rcu: Defining dependency "rcu" 00:28:55.496 Message: lib/mempool: Defining dependency "mempool" 00:28:55.496 Message: lib/mbuf: Defining dependency "mbuf" 00:28:55.496 Fetching value of define "__PCLMUL__" : 1 (cached) 00:28:55.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:28:55.496 Compiler for C supports arguments -mpclmul: YES 00:28:55.496 Compiler for C supports arguments -maes: YES 00:28:55.496 Compiler for C supports arguments -mavx512f: YES (cached) 00:28:55.496 Compiler for C supports arguments -mavx512bw: YES 00:28:55.496 Compiler for C supports arguments -mavx512dq: YES 00:28:55.496 Compiler for C supports arguments -mavx512vl: YES 00:28:55.496 Compiler for C supports arguments -mvpclmulqdq: YES 00:28:55.496 Compiler for C supports arguments -mavx2: YES 00:28:55.496 Compiler for C supports arguments -mavx: YES 00:28:55.496 Message: lib/net: Defining dependency "net" 00:28:55.496 Message: lib/meter: Defining dependency "meter" 00:28:55.496 Message: lib/ethdev: Defining dependency "ethdev" 00:28:55.496 Message: lib/pci: Defining dependency "pci" 00:28:55.496 Message: lib/cmdline: Defining dependency "cmdline" 00:28:55.496 Message: lib/hash: Defining dependency "hash" 00:28:55.496 Message: lib/timer: Defining dependency "timer" 00:28:55.496 Message: lib/compressdev: Defining dependency "compressdev" 00:28:55.496 Message: lib/cryptodev: Defining dependency "cryptodev" 00:28:55.496 Message: lib/dmadev: Defining dependency "dmadev" 00:28:55.496 Compiler for C supports arguments -Wno-cast-qual: YES 00:28:55.496 Message: lib/power: Defining dependency "power" 00:28:55.496 Message: lib/reorder: Defining dependency "reorder" 00:28:55.496 Message: lib/security: Defining dependency "security" 00:28:55.496 Has header "linux/userfaultfd.h" : YES 00:28:55.496 Has header "linux/vduse.h" : YES 00:28:55.496 Message: lib/vhost: Defining dependency "vhost" 00:28:55.496 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:28:55.496 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:28:55.496 Message: drivers/bus/vdev: Defining 
dependency "bus_vdev" 00:28:55.496 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:28:55.496 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:28:55.496 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:28:55.496 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:28:55.496 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:28:55.496 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:28:55.496 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:28:55.496 Program doxygen found: YES (/usr/bin/doxygen) 00:28:55.496 Configuring doxy-api-html.conf using configuration 00:28:55.496 Configuring doxy-api-man.conf using configuration 00:28:55.496 Program mandb found: YES (/usr/bin/mandb) 00:28:55.496 Program sphinx-build found: NO 00:28:55.496 Configuring rte_build_config.h using configuration 00:28:55.496 Message: 00:28:55.496 ================= 00:28:55.496 Applications Enabled 00:28:55.496 ================= 00:28:55.496 00:28:55.496 apps: 00:28:55.496 00:28:55.496 00:28:55.496 Message: 00:28:55.496 ================= 00:28:55.496 Libraries Enabled 00:28:55.496 ================= 00:28:55.496 00:28:55.496 libs: 00:28:55.496 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:28:55.496 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:28:55.496 cryptodev, dmadev, power, reorder, security, vhost, 00:28:55.496 00:28:55.496 Message: 00:28:55.496 =============== 00:28:55.496 Drivers Enabled 00:28:55.496 =============== 00:28:55.496 00:28:55.496 common: 00:28:55.496 00:28:55.496 bus: 00:28:55.496 pci, vdev, 00:28:55.496 mempool: 00:28:55.496 ring, 00:28:55.496 dma: 00:28:55.496 00:28:55.496 net: 00:28:55.496 00:28:55.496 crypto: 00:28:55.496 00:28:55.496 compress: 00:28:55.496 00:28:55.496 vdpa: 00:28:55.496 00:28:55.496 00:28:55.496 Message: 00:28:55.496 ================= 00:28:55.496 Content Skipped 00:28:55.496 ================= 00:28:55.496 00:28:55.496 apps: 00:28:55.496 dumpcap: explicitly disabled via build config 00:28:55.496 graph: explicitly disabled via build config 00:28:55.496 pdump: explicitly disabled via build config 00:28:55.496 proc-info: explicitly disabled via build config 00:28:55.496 test-acl: explicitly disabled via build config 00:28:55.496 test-bbdev: explicitly disabled via build config 00:28:55.496 test-cmdline: explicitly disabled via build config 00:28:55.496 test-compress-perf: explicitly disabled via build config 00:28:55.496 test-crypto-perf: explicitly disabled via build config 00:28:55.496 test-dma-perf: explicitly disabled via build config 00:28:55.496 test-eventdev: explicitly disabled via build config 00:28:55.497 test-fib: explicitly disabled via build config 00:28:55.497 test-flow-perf: explicitly disabled via build config 00:28:55.497 test-gpudev: explicitly disabled via build config 00:28:55.497 test-mldev: explicitly disabled via build config 00:28:55.497 test-pipeline: explicitly disabled via build config 00:28:55.497 test-pmd: explicitly disabled via build config 00:28:55.497 test-regex: explicitly disabled via build config 00:28:55.497 test-sad: explicitly disabled via build config 00:28:55.497 test-security-perf: explicitly disabled via build config 00:28:55.497 00:28:55.497 libs: 00:28:55.497 metrics: explicitly disabled via build config 00:28:55.497 acl: explicitly disabled via build config 00:28:55.497 bbdev: explicitly disabled via build config 00:28:55.497 bitratestats: 
explicitly disabled via build config 00:28:55.497 bpf: explicitly disabled via build config 00:28:55.497 cfgfile: explicitly disabled via build config 00:28:55.497 distributor: explicitly disabled via build config 00:28:55.497 efd: explicitly disabled via build config 00:28:55.497 eventdev: explicitly disabled via build config 00:28:55.497 dispatcher: explicitly disabled via build config 00:28:55.497 gpudev: explicitly disabled via build config 00:28:55.497 gro: explicitly disabled via build config 00:28:55.497 gso: explicitly disabled via build config 00:28:55.497 ip_frag: explicitly disabled via build config 00:28:55.497 jobstats: explicitly disabled via build config 00:28:55.497 latencystats: explicitly disabled via build config 00:28:55.497 lpm: explicitly disabled via build config 00:28:55.497 member: explicitly disabled via build config 00:28:55.497 pcapng: explicitly disabled via build config 00:28:55.497 rawdev: explicitly disabled via build config 00:28:55.497 regexdev: explicitly disabled via build config 00:28:55.497 mldev: explicitly disabled via build config 00:28:55.497 rib: explicitly disabled via build config 00:28:55.497 sched: explicitly disabled via build config 00:28:55.497 stack: explicitly disabled via build config 00:28:55.497 ipsec: explicitly disabled via build config 00:28:55.497 pdcp: explicitly disabled via build config 00:28:55.497 fib: explicitly disabled via build config 00:28:55.497 port: explicitly disabled via build config 00:28:55.497 pdump: explicitly disabled via build config 00:28:55.497 table: explicitly disabled via build config 00:28:55.497 pipeline: explicitly disabled via build config 00:28:55.497 graph: explicitly disabled via build config 00:28:55.497 node: explicitly disabled via build config 00:28:55.497 00:28:55.497 drivers: 00:28:55.497 common/cpt: not in enabled drivers build config 00:28:55.497 common/dpaax: not in enabled drivers build config 00:28:55.497 common/iavf: not in enabled drivers build config 00:28:55.497 common/idpf: not in enabled drivers build config 00:28:55.497 common/mvep: not in enabled drivers build config 00:28:55.497 common/octeontx: not in enabled drivers build config 00:28:55.497 bus/auxiliary: not in enabled drivers build config 00:28:55.497 bus/cdx: not in enabled drivers build config 00:28:55.497 bus/dpaa: not in enabled drivers build config 00:28:55.497 bus/fslmc: not in enabled drivers build config 00:28:55.497 bus/ifpga: not in enabled drivers build config 00:28:55.497 bus/platform: not in enabled drivers build config 00:28:55.497 bus/vmbus: not in enabled drivers build config 00:28:55.497 common/cnxk: not in enabled drivers build config 00:28:55.497 common/mlx5: not in enabled drivers build config 00:28:55.497 common/nfp: not in enabled drivers build config 00:28:55.497 common/qat: not in enabled drivers build config 00:28:55.497 common/sfc_efx: not in enabled drivers build config 00:28:55.497 mempool/bucket: not in enabled drivers build config 00:28:55.497 mempool/cnxk: not in enabled drivers build config 00:28:55.497 mempool/dpaa: not in enabled drivers build config 00:28:55.497 mempool/dpaa2: not in enabled drivers build config 00:28:55.497 mempool/octeontx: not in enabled drivers build config 00:28:55.497 mempool/stack: not in enabled drivers build config 00:28:55.497 dma/cnxk: not in enabled drivers build config 00:28:55.497 dma/dpaa: not in enabled drivers build config 00:28:55.497 dma/dpaa2: not in enabled drivers build config 00:28:55.497 dma/hisilicon: not in enabled drivers build config 00:28:55.497 
dma/idxd: not in enabled drivers build config 00:28:55.497 dma/ioat: not in enabled drivers build config 00:28:55.497 dma/skeleton: not in enabled drivers build config 00:28:55.497 net/af_packet: not in enabled drivers build config 00:28:55.497 net/af_xdp: not in enabled drivers build config 00:28:55.497 net/ark: not in enabled drivers build config 00:28:55.497 net/atlantic: not in enabled drivers build config 00:28:55.497 net/avp: not in enabled drivers build config 00:28:55.497 net/axgbe: not in enabled drivers build config 00:28:55.497 net/bnx2x: not in enabled drivers build config 00:28:55.497 net/bnxt: not in enabled drivers build config 00:28:55.497 net/bonding: not in enabled drivers build config 00:28:55.497 net/cnxk: not in enabled drivers build config 00:28:55.497 net/cpfl: not in enabled drivers build config 00:28:55.497 net/cxgbe: not in enabled drivers build config 00:28:55.497 net/dpaa: not in enabled drivers build config 00:28:55.497 net/dpaa2: not in enabled drivers build config 00:28:55.497 net/e1000: not in enabled drivers build config 00:28:55.497 net/ena: not in enabled drivers build config 00:28:55.497 net/enetc: not in enabled drivers build config 00:28:55.497 net/enetfec: not in enabled drivers build config 00:28:55.497 net/enic: not in enabled drivers build config 00:28:55.497 net/failsafe: not in enabled drivers build config 00:28:55.497 net/fm10k: not in enabled drivers build config 00:28:55.497 net/gve: not in enabled drivers build config 00:28:55.497 net/hinic: not in enabled drivers build config 00:28:55.497 net/hns3: not in enabled drivers build config 00:28:55.497 net/i40e: not in enabled drivers build config 00:28:55.497 net/iavf: not in enabled drivers build config 00:28:55.497 net/ice: not in enabled drivers build config 00:28:55.497 net/idpf: not in enabled drivers build config 00:28:55.497 net/igc: not in enabled drivers build config 00:28:55.497 net/ionic: not in enabled drivers build config 00:28:55.497 net/ipn3ke: not in enabled drivers build config 00:28:55.497 net/ixgbe: not in enabled drivers build config 00:28:55.497 net/mana: not in enabled drivers build config 00:28:55.497 net/memif: not in enabled drivers build config 00:28:55.497 net/mlx4: not in enabled drivers build config 00:28:55.497 net/mlx5: not in enabled drivers build config 00:28:55.497 net/mvneta: not in enabled drivers build config 00:28:55.497 net/mvpp2: not in enabled drivers build config 00:28:55.497 net/netvsc: not in enabled drivers build config 00:28:55.497 net/nfb: not in enabled drivers build config 00:28:55.497 net/nfp: not in enabled drivers build config 00:28:55.497 net/ngbe: not in enabled drivers build config 00:28:55.497 net/null: not in enabled drivers build config 00:28:55.497 net/octeontx: not in enabled drivers build config 00:28:55.497 net/octeon_ep: not in enabled drivers build config 00:28:55.497 net/pcap: not in enabled drivers build config 00:28:55.497 net/pfe: not in enabled drivers build config 00:28:55.497 net/qede: not in enabled drivers build config 00:28:55.497 net/ring: not in enabled drivers build config 00:28:55.497 net/sfc: not in enabled drivers build config 00:28:55.497 net/softnic: not in enabled drivers build config 00:28:55.497 net/tap: not in enabled drivers build config 00:28:55.497 net/thunderx: not in enabled drivers build config 00:28:55.497 net/txgbe: not in enabled drivers build config 00:28:55.497 net/vdev_netvsc: not in enabled drivers build config 00:28:55.497 net/vhost: not in enabled drivers build config 00:28:55.497 net/virtio: not 
in enabled drivers build config 00:28:55.497 net/vmxnet3: not in enabled drivers build config 00:28:55.497 raw/*: missing internal dependency, "rawdev" 00:28:55.497 crypto/armv8: not in enabled drivers build config 00:28:55.497 crypto/bcmfs: not in enabled drivers build config 00:28:55.497 crypto/caam_jr: not in enabled drivers build config 00:28:55.497 crypto/ccp: not in enabled drivers build config 00:28:55.497 crypto/cnxk: not in enabled drivers build config 00:28:55.497 crypto/dpaa_sec: not in enabled drivers build config 00:28:55.497 crypto/dpaa2_sec: not in enabled drivers build config 00:28:55.497 crypto/ipsec_mb: not in enabled drivers build config 00:28:55.497 crypto/mlx5: not in enabled drivers build config 00:28:55.497 crypto/mvsam: not in enabled drivers build config 00:28:55.497 crypto/nitrox: not in enabled drivers build config 00:28:55.497 crypto/null: not in enabled drivers build config 00:28:55.497 crypto/octeontx: not in enabled drivers build config 00:28:55.497 crypto/openssl: not in enabled drivers build config 00:28:55.497 crypto/scheduler: not in enabled drivers build config 00:28:55.497 crypto/uadk: not in enabled drivers build config 00:28:55.497 crypto/virtio: not in enabled drivers build config 00:28:55.497 compress/isal: not in enabled drivers build config 00:28:55.497 compress/mlx5: not in enabled drivers build config 00:28:55.497 compress/octeontx: not in enabled drivers build config 00:28:55.497 compress/zlib: not in enabled drivers build config 00:28:55.497 regex/*: missing internal dependency, "regexdev" 00:28:55.497 ml/*: missing internal dependency, "mldev" 00:28:55.497 vdpa/ifc: not in enabled drivers build config 00:28:55.497 vdpa/mlx5: not in enabled drivers build config 00:28:55.497 vdpa/nfp: not in enabled drivers build config 00:28:55.497 vdpa/sfc: not in enabled drivers build config 00:28:55.497 event/*: missing internal dependency, "eventdev" 00:28:55.497 baseband/*: missing internal dependency, "bbdev" 00:28:55.497 gpu/*: missing internal dependency, "gpudev" 00:28:55.497 00:28:55.497 00:28:55.756 Build targets in project: 85 00:28:55.756 00:28:55.756 DPDK 23.11.0 00:28:55.756 00:28:55.756 User defined options 00:28:55.756 default_library : static 00:28:55.756 libdir : lib 00:28:55.756 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:28:55.756 b_lto : true 00:28:55.756 b_sanitize : address 00:28:55.756 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:28:55.756 c_link_args : 00:28:55.756 cpu_instruction_set: native 00:28:55.756 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:28:55.756 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:28:55.756 enable_docs : false 00:28:55.756 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:28:55.756 enable_kmods : false 00:28:55.756 tests : false 00:28:55.756 00:28:55.756 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:28:56.323 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:28:56.582 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:28:56.582 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 
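The "User defined options" summary above pins down exactly how this DPDK 23.11 tree was configured: static libraries, link-time optimization, AddressSanitizer, and only the bus/pci, bus/vdev and mempool/ring drivers. A minimal sketch of an equivalent meson invocation, reconstructed from that summary (option names and values are copied from the log; this is an illustration, not the literal command the harness ran):

# Configure DPDK with the options printed in the summary above,
# then build (265 steps, matching the [n/265] lines that follow).
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  --libdir=lib \
  -Ddefault_library=static \
  -Db_lto=true \
  -Db_sanitize=address \
  -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps=test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf \
  -Ddisable_libs=node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false
ninja -C build-tmp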
00:28:56.582 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:28:56.582 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:28:56.582 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:28:56.582 [6/265] Linking static target lib/librte_kvargs.a 00:28:56.582 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:28:56.582 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:28:56.582 [9/265] Linking static target lib/librte_log.a 00:28:56.840 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:28:56.840 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:28:56.840 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:28:56.840 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:28:56.840 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:28:57.099 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:28:57.099 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:28:57.099 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:28:57.357 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:28:57.357 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:28:57.357 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:28:57.357 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:28:57.357 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:28:57.357 [23/265] Linking target lib/librte_log.so.24.0 00:28:57.357 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:28:57.616 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:28:57.616 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:28:57.616 [27/265] Linking target lib/librte_kvargs.so.24.0 00:28:57.616 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:28:57.874 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:28:57.874 [30/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:28:57.874 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:28:57.874 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:28:57.874 [33/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:28:57.874 [34/265] Linking static target lib/librte_telemetry.a 00:28:57.874 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:28:57.874 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:28:57.874 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:28:57.874 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:28:57.874 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:28:58.132 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:28:58.132 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:28:58.132 [42/265] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:28:58.391 [43/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:28:58.391 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:28:58.391 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:28:58.649 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:28:58.649 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:28:58.649 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:28:58.649 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:28:58.649 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:28:58.649 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:28:58.906 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:28:58.906 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:28:58.906 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:28:58.906 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:28:58.906 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:28:58.906 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:28:59.166 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:28:59.166 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:28:59.166 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:28:59.166 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:28:59.166 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:28:59.166 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:28:59.166 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:28:59.166 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:28:59.424 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:28:59.424 [67/265] Linking target lib/librte_telemetry.so.24.0 00:28:59.424 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:28:59.682 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:28:59.682 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:28:59.682 [71/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:28:59.682 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:28:59.682 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:28:59.682 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:28:59.682 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:28:59.682 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:28:59.682 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:28:59.682 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:28:59.940 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:29:00.198 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:29:00.198 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:29:00.198 [82/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:29:00.198 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:29:00.198 [84/265] Linking static target lib/librte_ring.a 00:29:00.198 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:29:00.198 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:29:00.457 [87/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:29:00.457 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:29:00.457 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:29:00.715 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:29:00.715 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:29:00.715 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:29:00.715 [93/265] Linking static target lib/librte_eal.a 00:29:00.715 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:29:00.715 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:29:00.715 [96/265] Linking static target lib/librte_mempool.a 00:29:00.973 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:29:00.973 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:29:00.973 [99/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:29:00.973 [100/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:29:00.973 [101/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:29:00.973 [102/265] Linking static target lib/librte_rcu.a 00:29:00.973 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:29:01.230 [104/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:29:01.231 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:29:01.231 [106/265] Linking static target lib/librte_net.a 00:29:01.231 [107/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:29:01.488 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:29:01.488 [109/265] Linking static target lib/librte_meter.a 00:29:01.488 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:29:01.488 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:29:01.488 [112/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:29:01.488 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:29:01.746 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:29:01.746 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:29:01.746 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:29:02.004 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:29:02.263 [118/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:29:02.263 [119/265] Linking static target lib/librte_mbuf.a 00:29:02.263 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:29:02.521 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:29:02.521 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:29:02.779 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:29:02.779 [124/265] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:29:02.779 [125/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:29:02.779 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:29:02.779 [127/265] Linking static target lib/librte_pci.a 00:29:02.779 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:29:02.779 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:29:03.037 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:29:03.037 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:29:03.037 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:29:03.295 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:29:03.295 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:29:03.295 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:29:03.295 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:29:03.295 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:29:03.295 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:29:03.295 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:29:03.295 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:29:03.295 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:29:03.553 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:29:03.553 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:29:03.553 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:29:03.553 [145/265] Linking static target lib/librte_cmdline.a 00:29:03.811 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:29:04.070 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:29:04.070 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:29:04.070 [149/265] Linking static target lib/librte_timer.a 00:29:04.070 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:29:04.328 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:29:04.328 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:29:04.328 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:29:04.328 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:29:04.328 [155/265] Linking static target lib/librte_compressdev.a 00:29:04.328 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:29:04.585 [157/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:29:04.585 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:29:04.585 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:29:04.842 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:29:04.842 [161/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:29:04.842 [162/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:29:04.842 [163/265] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:05.406 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:29:05.406 [165/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:29:05.406 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:29:05.406 [167/265] Linking static target lib/librte_dmadev.a 00:29:05.406 [168/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:29:05.663 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:29:05.663 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:29:05.663 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:29:05.921 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:29:05.921 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:05.921 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:29:06.179 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:29:06.179 [176/265] Linking static target lib/librte_power.a 00:29:06.436 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:29:06.436 [178/265] Linking static target lib/librte_reorder.a 00:29:06.436 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:29:06.436 [180/265] Linking static target lib/librte_security.a 00:29:06.436 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:29:06.436 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:29:06.693 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:29:06.693 [184/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:29:06.693 [185/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:29:06.950 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:29:07.208 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:29:07.208 [188/265] Linking static target lib/librte_ethdev.a 00:29:07.208 [189/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:29:07.208 [190/265] Linking static target lib/librte_cryptodev.a 00:29:07.465 [191/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:29:07.722 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:29:07.722 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:29:07.722 [194/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:29:07.722 [195/265] Linking static target lib/librte_hash.a 00:29:07.722 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:29:08.287 [197/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:29:08.287 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:29:08.287 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:29:08.545 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:29:08.545 [201/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:08.545 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
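The "Generating lib/<name>.sym_chk with a custom command" steps scattered through this phase are DPDK's export checks: after each library links, the build compares the symbols it actually exports against the library's version map and flags any mismatch. The same information can be eyeballed by hand with nm over a freshly linked shared object; a sketch assuming the build directory used in this run (an inspection aid only, not the check the build itself performs):

# List the dynamic symbols librte_eal really exports after linking.
nm -D --defined-only /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.0 | awk '{ print $3 }' | sort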
00:29:08.831 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:29:08.831 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:29:09.098 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:29:09.098 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:29:09.098 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:29:09.356 [208/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:29:09.356 [209/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:09.356 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:29:09.356 [211/265] Linking static target drivers/librte_bus_vdev.a 00:29:09.356 [212/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:29:09.356 [213/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:29:09.614 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:09.614 [215/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:29:09.615 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:09.615 [217/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:29:09.615 [218/265] Linking static target drivers/librte_bus_pci.a 00:29:09.615 [219/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:29:09.615 [220/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:29:09.615 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:29:09.615 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:09.615 [223/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:29:09.873 [224/265] Linking static target drivers/librte_mempool_ring.a 00:29:09.873 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:29:13.158 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:29:18.424 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:29:18.424 [228/265] Linking target lib/librte_eal.so.24.0 00:29:18.424 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:29:18.424 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:29:18.683 [230/265] Linking target lib/librte_meter.so.24.0 00:29:18.683 [231/265] Linking target lib/librte_pci.so.24.0 00:29:18.941 [232/265] Linking target lib/librte_ring.so.24.0 00:29:18.941 [233/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:29:18.941 [234/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:29:18.941 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:29:18.941 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:29:18.941 [237/265] Linking target lib/librte_timer.so.24.0 00:29:19.200 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:29:19.200 [239/265] Linking target lib/librte_dmadev.so.24.0 00:29:19.458 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:29:20.025 
[241/265] Linking target lib/librte_mempool.so.24.0 00:29:20.025 [242/265] Linking target lib/librte_rcu.so.24.0 00:29:20.025 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:29:20.025 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:29:20.284 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:29:20.542 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:29:21.918 [247/265] Linking target lib/librte_mbuf.so.24.0 00:29:21.918 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:29:22.176 [249/265] Linking target lib/librte_reorder.so.24.0 00:29:22.446 [250/265] Linking target lib/librte_compressdev.so.24.0 00:29:22.718 [251/265] Linking target lib/librte_net.so.24.0 00:29:22.976 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:29:24.353 [253/265] Linking target lib/librte_cmdline.so.24.0 00:29:24.353 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:29:24.353 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:29:24.918 [256/265] Linking target lib/librte_security.so.24.0 00:29:27.446 [257/265] Linking target lib/librte_hash.so.24.0 00:29:27.446 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:29:34.002 [259/265] Linking target lib/librte_ethdev.so.24.0 00:29:34.002 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs 00:29:34.259 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:29:36.196 [261/265] Linking target lib/librte_power.so.24.0 00:29:41.477 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:29:41.477 [263/265] Linking static target lib/librte_vhost.a 00:29:42.853 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:30:29.515 [265/265] Linking target lib/librte_vhost.so.24.0 00:30:29.515 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs 00:30:29.515 INFO: autodetecting backend as ninja 00:30:29.515 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:30:29.515 CC lib/ut_mock/mock.o 00:30:29.515 CC lib/ut/ut.o 00:30:29.515 CC lib/log/log.o 00:30:29.515 CC lib/log/log_flags.o 00:30:29.515 CC lib/log/log_deprecated.o 00:30:29.515 LIB libspdk_ut_mock.a 00:30:29.515 LIB libspdk_log.a 00:30:29.515 LIB libspdk_ut.a 00:30:29.515 CC lib/util/base64.o 00:30:29.515 CC lib/util/bit_array.o 00:30:29.515 CC lib/ioat/ioat.o 00:30:29.515 CXX lib/trace_parser/trace.o 00:30:29.515 CC lib/util/crc16.o 00:30:29.515 CC lib/dma/dma.o 00:30:29.515 CC lib/util/crc32.o 00:30:29.515 CC lib/util/crc32c.o 00:30:29.515 CC lib/util/cpuset.o 00:30:29.515 CC lib/vfio_user/host/vfio_user_pci.o 00:30:29.515 CC lib/util/crc32_ieee.o 00:30:29.515 CC lib/util/crc64.o 00:30:29.515 LIB libspdk_dma.a 00:30:29.515 CC lib/util/dif.o 00:30:29.515 CC lib/util/fd.o 00:30:29.515 CC lib/vfio_user/host/vfio_user.o 00:30:29.515 LIB libspdk_ioat.a 00:30:29.515 CC lib/util/file.o 00:30:29.515 CC lib/util/hexlify.o 00:30:29.515 CC lib/util/iov.o 00:30:29.515 CC lib/util/math.o 00:30:29.515 CC lib/util/pipe.o 00:30:29.515 CC lib/util/strerror_tls.o 00:30:29.515 CC lib/util/string.o 00:30:29.515 CC lib/util/uuid.o 00:30:29.515 CC lib/util/fd_group.o 00:30:29.515 CC lib/util/xor.o 00:30:29.515 LIB libspdk_vfio_user.a 00:30:29.515 CC lib/util/zipf.o 
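The recurring "lto-wrapper: warning: using serial compilation of N LTRANS jobs" lines above are GCC reporting that each LTO link was partitioned into N pieces but compiled them one after another; the output is correct, the links are just slower than they could be. Since this build enables b_lto, one plausible remedy (an assumption about tuning, not something this run did) is meson's b_lto_threads base option, which has meson pass -flto=<n> so the LTRANS jobs run in parallel:

# Allow up to 8 parallel LTRANS jobs per LTO link, then relink.
meson configure build-tmp -Db_lto_threads=8
ninja -C build-tmp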
00:30:29.515 LIB libspdk_util.a 00:30:29.515 CC lib/json/json_parse.o 00:30:29.515 CC lib/env_dpdk/env.o 00:30:29.515 CC lib/json/json_util.o 00:30:29.515 CC lib/json/json_write.o 00:30:29.515 CC lib/env_dpdk/memory.o 00:30:29.515 CC lib/vmd/vmd.o 00:30:29.515 CC lib/conf/conf.o 00:30:29.515 CC lib/idxd/idxd.o 00:30:29.515 CC lib/rdma/common.o 00:30:29.515 LIB libspdk_trace_parser.a 00:30:29.515 CC lib/rdma/rdma_verbs.o 00:30:29.515 LIB libspdk_conf.a 00:30:29.515 CC lib/env_dpdk/pci.o 00:30:29.515 CC lib/env_dpdk/init.o 00:30:29.515 LIB libspdk_json.a 00:30:29.515 CC lib/env_dpdk/threads.o 00:30:29.515 CC lib/vmd/led.o 00:30:29.515 CC lib/env_dpdk/pci_ioat.o 00:30:29.515 LIB libspdk_rdma.a 00:30:29.515 CC lib/env_dpdk/pci_virtio.o 00:30:29.515 CC lib/idxd/idxd_user.o 00:30:29.515 CC lib/env_dpdk/pci_vmd.o 00:30:29.515 CC lib/env_dpdk/pci_idxd.o 00:30:29.515 LIB libspdk_vmd.a 00:30:29.515 CC lib/env_dpdk/pci_event.o 00:30:29.515 CC lib/env_dpdk/sigbus_handler.o 00:30:29.515 CC lib/jsonrpc/jsonrpc_server.o 00:30:29.515 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:29.515 CC lib/env_dpdk/pci_dpdk.o 00:30:29.515 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:29.515 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:29.515 LIB libspdk_idxd.a 00:30:29.515 CC lib/jsonrpc/jsonrpc_client.o 00:30:29.515 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:29.515 LIB libspdk_jsonrpc.a 00:30:29.772 CC lib/rpc/rpc.o 00:30:29.772 LIB libspdk_env_dpdk.a 00:30:29.772 LIB libspdk_rpc.a 00:30:30.029 CC lib/trace/trace.o 00:30:30.029 CC lib/trace/trace_flags.o 00:30:30.029 CC lib/trace/trace_rpc.o 00:30:30.029 CC lib/notify/notify.o 00:30:30.029 CC lib/notify/notify_rpc.o 00:30:30.029 CC lib/sock/sock.o 00:30:30.029 CC lib/sock/sock_rpc.o 00:30:30.029 LIB libspdk_notify.a 00:30:30.029 LIB libspdk_trace.a 00:30:30.285 LIB libspdk_sock.a 00:30:30.285 CC lib/thread/thread.o 00:30:30.285 CC lib/thread/iobuf.o 00:30:30.285 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:30.285 CC lib/nvme/nvme_ctrlr.o 00:30:30.285 CC lib/nvme/nvme_fabric.o 00:30:30.285 CC lib/nvme/nvme_ns_cmd.o 00:30:30.285 CC lib/nvme/nvme_ns.o 00:30:30.285 CC lib/nvme/nvme_pcie_common.o 00:30:30.285 CC lib/nvme/nvme_qpair.o 00:30:30.285 CC lib/nvme/nvme_pcie.o 00:30:30.543 CC lib/nvme/nvme.o 00:30:30.802 LIB libspdk_thread.a 00:30:30.802 CC lib/nvme/nvme_quirks.o 00:30:30.802 CC lib/nvme/nvme_transport.o 00:30:30.802 CC lib/nvme/nvme_discovery.o 00:30:30.802 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:30.802 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:30.802 CC lib/nvme/nvme_tcp.o 00:30:31.060 CC lib/accel/accel.o 00:30:31.060 CC lib/accel/accel_rpc.o 00:30:31.060 CC lib/accel/accel_sw.o 00:30:31.060 CC lib/blob/blobstore.o 00:30:31.318 CC lib/blob/request.o 00:30:31.318 CC lib/blob/zeroes.o 00:30:31.318 CC lib/nvme/nvme_opal.o 00:30:31.318 CC lib/nvme/nvme_io_msg.o 00:30:31.318 CC lib/blob/blob_bs_dev.o 00:30:31.318 CC lib/nvme/nvme_poll_group.o 00:30:31.318 CC lib/nvme/nvme_zns.o 00:30:31.318 CC lib/nvme/nvme_cuse.o 00:30:31.318 LIB libspdk_accel.a 00:30:31.318 CC lib/nvme/nvme_vfio_user.o 00:30:31.577 CC lib/init/json_config.o 00:30:31.577 CC lib/init/subsystem.o 00:30:31.577 CC lib/init/subsystem_rpc.o 00:30:31.577 CC lib/nvme/nvme_rdma.o 00:30:31.577 CC lib/init/rpc.o 00:30:31.577 CC lib/virtio/virtio.o 00:30:31.836 CC lib/virtio/virtio_vhost_user.o 00:30:31.836 LIB libspdk_init.a 00:30:31.836 CC lib/virtio/virtio_vfio_user.o 00:30:31.836 CC lib/virtio/virtio_pci.o 00:30:31.836 CC lib/bdev/bdev.o 00:30:31.836 CC lib/bdev/bdev_rpc.o 00:30:31.836 CC lib/bdev/bdev_zone.o 00:30:31.836 CC 
lib/event/app.o 00:30:31.836 CC lib/bdev/part.o 00:30:31.836 CC lib/bdev/scsi_nvme.o 00:30:31.836 CC lib/event/reactor.o 00:30:32.094 LIB libspdk_virtio.a 00:30:32.094 CC lib/event/log_rpc.o 00:30:32.094 CC lib/event/app_rpc.o 00:30:32.094 CC lib/event/scheduler_static.o 00:30:32.094 LIB libspdk_event.a 00:30:32.352 LIB libspdk_blob.a 00:30:32.352 LIB libspdk_nvme.a 00:30:32.352 CC lib/blobfs/tree.o 00:30:32.352 CC lib/blobfs/blobfs.o 00:30:32.352 CC lib/lvol/lvol.o 00:30:32.918 LIB libspdk_blobfs.a 00:30:32.918 LIB libspdk_lvol.a 00:30:32.918 LIB libspdk_bdev.a 00:30:32.918 CC lib/nvmf/ctrlr_discovery.o 00:30:32.918 CC lib/nvmf/ctrlr.o 00:30:32.918 CC lib/scsi/dev.o 00:30:32.918 CC lib/nvmf/ctrlr_bdev.o 00:30:32.918 CC lib/nvmf/nvmf.o 00:30:32.918 CC lib/nvmf/subsystem.o 00:30:32.918 CC lib/scsi/port.o 00:30:32.918 CC lib/scsi/lun.o 00:30:32.918 CC lib/nbd/nbd.o 00:30:32.918 CC lib/ftl/ftl_core.o 00:30:33.176 CC lib/ftl/ftl_init.o 00:30:33.176 CC lib/ftl/ftl_layout.o 00:30:33.176 CC lib/ftl/ftl_debug.o 00:30:33.176 CC lib/scsi/scsi.o 00:30:33.176 CC lib/scsi/scsi_bdev.o 00:30:33.176 CC lib/nbd/nbd_rpc.o 00:30:33.435 CC lib/scsi/scsi_pr.o 00:30:33.435 CC lib/ftl/ftl_io.o 00:30:33.435 CC lib/ftl/ftl_sb.o 00:30:33.435 CC lib/scsi/scsi_rpc.o 00:30:33.435 CC lib/scsi/task.o 00:30:33.435 CC lib/ftl/ftl_l2p.o 00:30:33.435 LIB libspdk_nbd.a 00:30:33.435 CC lib/nvmf/nvmf_rpc.o 00:30:33.435 CC lib/nvmf/transport.o 00:30:33.435 CC lib/ftl/ftl_l2p_flat.o 00:30:33.435 CC lib/nvmf/tcp.o 00:30:33.435 CC lib/nvmf/rdma.o 00:30:33.435 CC lib/ftl/ftl_nv_cache.o 00:30:33.435 LIB libspdk_scsi.a 00:30:33.435 CC lib/ftl/ftl_band.o 00:30:33.704 CC lib/ftl/ftl_band_ops.o 00:30:33.704 CC lib/ftl/ftl_writer.o 00:30:33.704 CC lib/iscsi/conn.o 00:30:33.704 CC lib/iscsi/init_grp.o 00:30:33.704 CC lib/vhost/vhost.o 00:30:33.704 CC lib/vhost/vhost_rpc.o 00:30:33.704 CC lib/vhost/vhost_scsi.o 00:30:33.704 CC lib/vhost/vhost_blk.o 00:30:33.704 CC lib/vhost/rte_vhost_user.o 00:30:33.988 CC lib/iscsi/iscsi.o 00:30:33.988 CC lib/ftl/ftl_rq.o 00:30:33.988 CC lib/iscsi/md5.o 00:30:33.988 CC lib/iscsi/param.o 00:30:34.246 CC lib/ftl/ftl_reloc.o 00:30:34.246 CC lib/iscsi/portal_grp.o 00:30:34.246 CC lib/iscsi/tgt_node.o 00:30:34.246 LIB libspdk_nvmf.a 00:30:34.246 CC lib/iscsi/iscsi_subsystem.o 00:30:34.246 CC lib/iscsi/iscsi_rpc.o 00:30:34.246 CC lib/iscsi/task.o 00:30:34.246 CC lib/ftl/ftl_l2p_cache.o 00:30:34.246 CC lib/ftl/ftl_p2l.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:30:34.503 LIB libspdk_vhost.a 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_startup.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_md.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_misc.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:30:34.503 LIB libspdk_iscsi.a 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_band.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:30:34.503 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:30:34.761 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:30:34.761 CC lib/ftl/utils/ftl_conf.o 00:30:34.761 CC lib/ftl/utils/ftl_md.o 00:30:34.761 CC lib/ftl/utils/ftl_mempool.o 00:30:34.761 CC lib/ftl/utils/ftl_bitmap.o 00:30:34.761 CC lib/ftl/utils/ftl_property.o 00:30:34.761 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:30:34.761 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:30:34.761 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:30:34.761 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:30:34.761 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:30:34.761 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:30:34.761 CC lib/ftl/upgrade/ftl_sb_v3.o 00:30:34.761 CC lib/ftl/upgrade/ftl_sb_v5.o 00:30:35.019 CC lib/ftl/nvc/ftl_nvc_dev.o 00:30:35.019 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:30:35.019 CC lib/ftl/base/ftl_base_dev.o 00:30:35.019 CC lib/ftl/base/ftl_base_bdev.o 00:30:35.278 LIB libspdk_ftl.a 00:30:35.278 CC module/env_dpdk/env_dpdk_rpc.o 00:30:35.278 CC module/blob/bdev/blob_bdev.o 00:30:35.278 CC module/accel/ioat/accel_ioat.o 00:30:35.278 CC module/accel/error/accel_error.o 00:30:35.278 CC module/accel/iaa/accel_iaa.o 00:30:35.278 CC module/scheduler/gscheduler/gscheduler.o 00:30:35.278 CC module/scheduler/dynamic/scheduler_dynamic.o 00:30:35.278 CC module/sock/posix/posix.o 00:30:35.278 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:30:35.278 CC module/accel/dsa/accel_dsa.o 00:30:35.535 LIB libspdk_env_dpdk_rpc.a 00:30:35.535 CC module/accel/ioat/accel_ioat_rpc.o 00:30:35.535 LIB libspdk_scheduler_gscheduler.a 00:30:35.535 CC module/accel/dsa/accel_dsa_rpc.o 00:30:35.535 LIB libspdk_scheduler_dpdk_governor.a 00:30:35.535 CC module/accel/error/accel_error_rpc.o 00:30:35.535 CC module/accel/iaa/accel_iaa_rpc.o 00:30:35.535 LIB libspdk_blob_bdev.a 00:30:35.535 LIB libspdk_scheduler_dynamic.a 00:30:35.535 LIB libspdk_accel_dsa.a 00:30:35.535 LIB libspdk_accel_ioat.a 00:30:35.535 LIB libspdk_accel_iaa.a 00:30:35.535 CC module/bdev/error/vbdev_error.o 00:30:35.535 CC module/blobfs/bdev/blobfs_bdev.o 00:30:35.535 CC module/bdev/gpt/gpt.o 00:30:35.535 LIB libspdk_accel_error.a 00:30:35.535 CC module/bdev/gpt/vbdev_gpt.o 00:30:35.536 CC module/bdev/delay/vbdev_delay.o 00:30:35.536 CC module/bdev/lvol/vbdev_lvol.o 00:30:35.793 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:30:35.793 CC module/bdev/null/bdev_null.o 00:30:35.793 CC module/bdev/malloc/bdev_malloc.o 00:30:35.793 LIB libspdk_sock_posix.a 00:30:35.793 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:30:35.793 CC module/bdev/delay/vbdev_delay_rpc.o 00:30:35.793 CC module/bdev/malloc/bdev_malloc_rpc.o 00:30:35.793 LIB libspdk_blobfs_bdev.a 00:30:35.793 CC module/bdev/error/vbdev_error_rpc.o 00:30:35.793 CC module/bdev/null/bdev_null_rpc.o 00:30:35.793 LIB libspdk_bdev_gpt.a 00:30:36.051 LIB libspdk_bdev_delay.a 00:30:36.051 CC module/bdev/nvme/bdev_nvme.o 00:30:36.051 LIB libspdk_bdev_malloc.a 00:30:36.051 CC module/bdev/raid/bdev_raid.o 00:30:36.051 CC module/bdev/passthru/vbdev_passthru.o 00:30:36.051 CC module/bdev/raid/bdev_raid_rpc.o 00:30:36.051 CC module/bdev/raid/bdev_raid_sb.o 00:30:36.051 LIB libspdk_bdev_lvol.a 00:30:36.051 LIB libspdk_bdev_error.a 00:30:36.051 LIB libspdk_bdev_null.a 00:30:36.051 CC module/bdev/split/vbdev_split.o 00:30:36.051 CC module/bdev/raid/raid0.o 00:30:36.051 CC module/bdev/raid/raid1.o 00:30:36.051 CC module/bdev/raid/concat.o 00:30:36.051 CC module/bdev/zone_block/vbdev_zone_block.o 00:30:36.051 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:30:36.051 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:30:36.051 CC module/bdev/split/vbdev_split_rpc.o 00:30:36.309 CC module/bdev/aio/bdev_aio.o 00:30:36.309 CC module/bdev/aio/bdev_aio_rpc.o 00:30:36.309 LIB libspdk_bdev_zone_block.a 00:30:36.309 CC module/bdev/ftl/bdev_ftl.o 00:30:36.309 CC module/bdev/iscsi/bdev_iscsi.o 00:30:36.309 CC module/bdev/ftl/bdev_ftl_rpc.o 00:30:36.309 LIB libspdk_bdev_passthru.a 00:30:36.309 CC module/bdev/virtio/bdev_virtio_scsi.o 00:30:36.309 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:30:36.309 LIB libspdk_bdev_split.a 00:30:36.309 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:30:36.309 CC module/bdev/nvme/nvme_rpc.o 00:30:36.309 LIB libspdk_bdev_raid.a 00:30:36.309 CC module/bdev/virtio/bdev_virtio_blk.o 00:30:36.309 CC module/bdev/virtio/bdev_virtio_rpc.o 00:30:36.309 CC module/bdev/nvme/bdev_mdns_client.o 00:30:36.309 LIB libspdk_bdev_ftl.a 00:30:36.567 LIB libspdk_bdev_aio.a 00:30:36.567 CC module/bdev/nvme/vbdev_opal.o 00:30:36.567 CC module/bdev/nvme/vbdev_opal_rpc.o 00:30:36.567 LIB libspdk_bdev_iscsi.a 00:30:36.567 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:30:36.567 LIB libspdk_bdev_virtio.a 00:30:36.826 LIB libspdk_bdev_nvme.a 00:30:37.084 CC module/event/subsystems/sock/sock.o 00:30:37.084 CC module/event/subsystems/vmd/vmd.o 00:30:37.084 CC module/event/subsystems/vmd/vmd_rpc.o 00:30:37.084 CC module/event/subsystems/scheduler/scheduler.o 00:30:37.084 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:30:37.084 CC module/event/subsystems/iobuf/iobuf.o 00:30:37.084 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:30:37.084 LIB libspdk_event_sock.a 00:30:37.084 LIB libspdk_event_scheduler.a 00:30:37.084 LIB libspdk_event_vhost_blk.a 00:30:37.084 LIB libspdk_event_vmd.a 00:30:37.084 LIB libspdk_event_iobuf.a 00:30:37.342 CC module/event/subsystems/accel/accel.o 00:30:37.342 LIB libspdk_event_accel.a 00:30:37.601 CC module/event/subsystems/bdev/bdev.o 00:30:37.601 LIB libspdk_event_bdev.a 00:30:37.859 CC module/event/subsystems/nbd/nbd.o 00:30:37.860 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:30:37.860 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:30:37.860 CC module/event/subsystems/scsi/scsi.o 00:30:37.860 LIB libspdk_event_nbd.a 00:30:37.860 LIB libspdk_event_scsi.a 00:30:38.118 LIB libspdk_event_nvmf.a 00:30:38.118 CC module/event/subsystems/iscsi/iscsi.o 00:30:38.118 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:30:38.118 LIB libspdk_event_iscsi.a 00:30:38.118 LIB libspdk_event_vhost_scsi.a 00:30:38.377 CXX app/trace/trace.o 00:30:38.377 CC app/trace_record/trace_record.o 00:30:38.377 CC examples/nvme/hello_world/hello_world.o 00:30:38.377 CC examples/sock/hello_world/hello_sock.o 00:30:38.377 CC examples/ioat/perf/perf.o 00:30:38.377 CC examples/vmd/lsvmd/lsvmd.o 00:30:38.377 CC examples/accel/perf/accel_perf.o 00:30:38.377 CC examples/blob/hello_world/hello_blob.o 00:30:38.377 CC examples/bdev/hello_world/hello_bdev.o 00:30:38.377 CC test/accel/dif/dif.o 00:30:38.377 LINK lsvmd 00:30:38.636 LINK spdk_trace_record 00:30:38.636 LINK ioat_perf 00:30:38.636 LINK hello_sock 00:30:38.636 LINK hello_world 00:30:38.636 LINK spdk_trace 00:30:38.636 LINK hello_bdev 00:30:38.636 LINK hello_blob 00:30:38.636 LINK accel_perf 00:30:38.636 LINK dif 00:30:50.877 CC examples/bdev/bdevperf/bdevperf.o 00:30:53.405 LINK bdevperf 00:30:57.586 CC app/nvmf_tgt/nvmf_main.o 00:30:58.959 LINK nvmf_tgt 00:31:08.952 CC examples/ioat/verify/verify.o 00:31:09.209 LINK verify 00:31:17.321 CC examples/vmd/led/led.o 00:31:17.321 CC app/iscsi_tgt/iscsi_tgt.o 00:31:17.321 LINK led 00:31:18.256 LINK iscsi_tgt 00:31:22.456 CC app/spdk_tgt/spdk_tgt.o 00:31:23.023 LINK spdk_tgt 00:31:24.920 CC examples/nvme/reconnect/reconnect.o 00:31:26.819 LINK reconnect 00:31:41.712 CC examples/nvme/nvme_manage/nvme_manage.o 00:31:41.970 LINK nvme_manage 00:32:20.814 CC examples/nvme/arbitration/arbitration.o 00:32:20.814 LINK arbitration 00:32:38.893 CC examples/blob/cli/blobcli.o 00:32:40.269 LINK blobcli 00:33:02.193 CC examples/nvme/hotplug/hotplug.o 00:33:02.193 LINK hotplug 00:33:06.383 CC app/spdk_lspci/spdk_lspci.o 00:33:06.642 LINK spdk_lspci 
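With spdk_tgt, nvmf_tgt and iscsi_tgt linked above, the work tree already contains runnable targets. A quick smoke test is to start one and list its JSON-RPC methods; the sketch below assumes the repo root used in this run, the default build/bin layout, hugepages already configured (scripts/setup.sh), and uses rpc_get_methods, a standard SPDK RPC (an illustration, not a step this job runs at this point):

cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &            # serves JSON-RPC on /var/tmp/spdk.sock by default
./scripts/rpc.py rpc_get_methods  # prints every method the running target accepts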
00:33:07.210 CC app/spdk_nvme_perf/perf.o 00:33:11.408 LINK spdk_nvme_perf 00:33:12.342 CC test/app/bdev_svc/bdev_svc.o 00:33:13.277 LINK bdev_svc 00:33:21.389 CC examples/nvme/cmb_copy/cmb_copy.o 00:33:21.389 LINK cmb_copy 00:33:33.589 CC examples/nvme/abort/abort.o 00:33:33.848 LINK abort 00:34:05.913 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:34:05.913 LINK nvme_fuzz 00:34:05.913 CC app/spdk_nvme_identify/identify.o 00:34:08.439 CC app/spdk_nvme_discover/discovery_aer.o 00:34:09.372 LINK spdk_nvme_identify 00:34:09.372 LINK spdk_nvme_discover 00:34:19.342 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:34:19.600 CC test/app/histogram_perf/histogram_perf.o 00:34:20.536 LINK histogram_perf 00:34:22.437 CC test/app/jsoncat/jsoncat.o 00:34:23.421 LINK jsoncat 00:34:25.958 LINK iscsi_fuzz 00:34:38.160 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:34:38.160 LINK pmr_persistence 00:35:00.090 CC app/spdk_top/spdk_top.o 00:35:00.090 CC test/app/stub/stub.o 00:35:00.090 LINK stub 00:35:00.090 LINK spdk_top 00:35:01.467 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:35:02.035 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:35:02.035 CC app/spdk_dd/spdk_dd.o 00:35:02.035 CC app/vhost/vhost.o 00:35:02.972 LINK vhost 00:35:02.972 LINK vhost_fuzz 00:35:03.230 LINK spdk_dd 00:35:06.512 CC examples/nvmf/nvmf/nvmf.o 00:35:07.449 LINK nvmf 00:35:08.384 CC app/fio/nvme/fio_plugin.o 00:35:10.946 LINK spdk_nvme 00:35:17.504 CC app/fio/bdev/fio_plugin.o 00:35:18.070 LINK spdk_bdev 00:35:20.603 CC examples/util/zipf/zipf.o 00:35:21.170 LINK zipf 00:35:27.734 CC examples/thread/thread/thread_ex.o 00:35:28.299 LINK thread 00:35:31.602 CC examples/idxd/perf/perf.o 00:35:32.977 LINK idxd_perf 00:35:41.091 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:41.350 LINK interrupt_tgt 00:35:46.620 CC test/bdev/bdevio/bdevio.o 00:35:47.997 LINK bdevio 00:35:53.271 CC test/blobfs/mkfs/mkfs.o 00:35:54.206 LINK mkfs 00:36:12.322 TEST_HEADER include/spdk/config.h 00:36:12.323 CXX test/cpp_headers/accel.o 00:36:13.259 CXX test/cpp_headers/accel_module.o 00:36:14.634 CXX test/cpp_headers/assert.o 00:36:15.567 CXX test/cpp_headers/barrier.o 00:36:17.467 CXX test/cpp_headers/base64.o 00:36:18.841 CXX test/cpp_headers/bdev.o 00:36:20.218 CXX test/cpp_headers/bdev_module.o 00:36:22.123 CXX test/cpp_headers/bdev_zone.o 00:36:24.028 CXX test/cpp_headers/bit_array.o 00:36:25.403 CXX test/cpp_headers/bit_pool.o 00:36:26.778 CXX test/cpp_headers/blob.o 00:36:28.680 CXX test/cpp_headers/blob_bdev.o 00:36:30.584 CXX test/cpp_headers/blobfs.o 00:36:31.962 CXX test/cpp_headers/blobfs_bdev.o 00:36:33.864 CXX test/cpp_headers/conf.o 00:36:35.244 CXX test/cpp_headers/config.o 00:36:35.521 CXX test/cpp_headers/cpuset.o 00:36:36.909 CXX test/cpp_headers/crc16.o 00:36:38.284 CXX test/cpp_headers/crc32.o 00:36:40.184 CXX test/cpp_headers/crc64.o 00:36:41.562 CXX test/cpp_headers/dif.o 00:36:42.938 CXX test/cpp_headers/dma.o 00:36:44.311 CXX test/cpp_headers/endian.o 00:36:45.687 CXX test/cpp_headers/env.o 00:36:47.072 CXX test/cpp_headers/env_dpdk.o 00:36:48.447 CXX test/cpp_headers/event.o 00:36:50.348 CXX test/cpp_headers/fd.o 00:36:51.722 CXX test/cpp_headers/fd_group.o 00:36:53.622 CXX test/cpp_headers/file.o 00:36:54.995 CXX test/cpp_headers/ftl.o 00:36:56.895 CXX test/cpp_headers/gpt_spec.o 00:36:58.294 CXX test/cpp_headers/hexlify.o 00:37:00.194 CXX test/cpp_headers/histogram_data.o 00:37:01.570 CXX test/cpp_headers/idxd.o 00:37:03.472 CXX test/cpp_headers/idxd_spec.o 00:37:04.846 CXX test/cpp_headers/init.o 00:37:06.221 CXX 
test/cpp_headers/ioat.o
00:37:08.125 CXX test/cpp_headers/ioat_spec.o
00:37:09.505 CXX test/cpp_headers/iscsi_spec.o
00:37:11.410 CXX test/cpp_headers/json.o
00:37:12.788 CXX test/cpp_headers/jsonrpc.o
00:37:14.182 CXX test/cpp_headers/likely.o
00:37:16.099 CXX test/cpp_headers/log.o
00:37:17.474 CXX test/cpp_headers/lvol.o
00:37:19.378 CXX test/cpp_headers/memory.o
00:37:20.755 CXX test/cpp_headers/mmio.o
00:37:22.130 CXX test/cpp_headers/nbd.o
00:37:22.388 CXX test/cpp_headers/notify.o
00:37:23.764 CXX test/cpp_headers/nvme.o
00:37:25.139 CXX test/cpp_headers/nvme_intel.o
00:37:26.515 CXX test/cpp_headers/nvme_ocssd.o
00:37:27.901 CXX test/cpp_headers/nvme_ocssd_spec.o
00:37:29.274 CXX test/cpp_headers/nvme_spec.o
00:37:30.647 CXX test/cpp_headers/nvme_zns.o
00:37:30.905 CXX test/cpp_headers/nvmf.o
00:37:31.839 CXX test/cpp_headers/nvmf_cmd.o
00:37:32.417 CXX test/cpp_headers/nvmf_fc_spec.o
00:37:32.417 CXX test/cpp_headers/nvmf_spec.o
00:37:33.025 CXX test/cpp_headers/nvmf_transport.o
00:37:33.591 CXX test/cpp_headers/opal.o
00:37:33.591 CXX test/cpp_headers/opal_spec.o
00:37:34.538 CXX test/cpp_headers/pci_ids.o
00:37:34.538 CXX test/cpp_headers/pipe.o
00:37:35.103 CXX test/cpp_headers/queue.o
00:37:35.103 CXX test/cpp_headers/reduce.o
00:37:35.672 CXX test/cpp_headers/rpc.o
00:37:35.930 CXX test/cpp_headers/scheduler.o
00:37:35.930 CXX test/cpp_headers/scsi.o
00:37:36.864 CXX test/cpp_headers/scsi_spec.o
00:37:36.864 CXX test/cpp_headers/sock.o
00:37:37.799 CXX test/cpp_headers/stdinc.o
00:37:37.799 CXX test/cpp_headers/string.o
00:37:38.057 CC test/dma/test_dma/test_dma.o
00:37:38.623 CXX test/cpp_headers/thread.o
00:37:38.882 CXX test/cpp_headers/trace.o
00:37:39.447 LINK test_dma
00:37:39.706 CXX test/cpp_headers/trace_parser.o
00:37:39.706 CXX test/cpp_headers/tree.o
00:37:39.963 CXX test/cpp_headers/ublk.o
00:37:40.896 CXX test/cpp_headers/util.o
00:37:40.896 CXX test/cpp_headers/uuid.o
00:37:41.828 CXX test/cpp_headers/version.o
00:37:41.828 CXX test/cpp_headers/vfio_user_pci.o
00:37:42.762 CXX test/cpp_headers/vfio_user_spec.o
00:37:43.020 CC test/env/mem_callbacks/mem_callbacks.o
00:37:43.955 CXX test/cpp_headers/vhost.o
00:37:44.888 CXX test/cpp_headers/vmd.o
00:37:45.455 LINK mem_callbacks
00:37:45.712 CXX test/cpp_headers/xor.o
00:37:46.644 CXX test/cpp_headers/zipf.o
00:37:48.544 CC test/event/event_perf/event_perf.o
00:37:49.110 LINK event_perf
00:37:51.640 CC test/event/reactor/reactor.o
00:37:51.899 LINK reactor
00:38:01.871 CC test/event/reactor_perf/reactor_perf.o
00:38:01.871 LINK reactor_perf
00:38:02.809 CC test/event/app_repeat/app_repeat.o
00:38:03.745 LINK app_repeat
00:38:18.621 CC test/env/vtophys/vtophys.o
00:38:18.621 LINK vtophys
00:38:28.590 CC test/lvol/esnap/esnap.o
00:38:28.590 CC test/event/scheduler/scheduler.o
00:38:28.590 CC test/nvme/aer/aer.o
00:38:28.590 LINK scheduler
00:38:28.590 CC test/nvme/reset/reset.o
00:38:29.157 LINK aer
00:38:29.725 LINK reset
00:38:29.985 CC test/nvme/sgl/sgl.o
00:38:30.955 LINK sgl
00:38:39.068 LINK esnap
00:38:39.635 CC test/nvme/e2edp/nvme_dp.o
00:38:41.013 LINK nvme_dp
00:38:49.127 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:38:50.063 LINK env_dpdk_post_init
00:39:04.969 CC test/env/memory/memory_ut.o
00:39:08.252 LINK memory_ut
00:39:08.252 CC test/nvme/overhead/overhead.o
00:39:09.626 LINK overhead
00:39:19.602 CC test/env/pci/pci_ut.o
00:39:19.602 CC test/nvme/err_injection/err_injection.o
00:39:19.862 LINK pci_ut
00:39:20.429 LINK err_injection
00:39:25.699 CC test/nvme/startup/startup.o
00:39:26.265 LINK startup
00:39:27.200 CC test/nvme/reserve/reserve.o
00:39:27.200 CC test/nvme/simple_copy/simple_copy.o
00:39:28.135 LINK reserve
00:39:28.393 LINK simple_copy
00:39:43.269 CC test/nvme/connect_stress/connect_stress.o
00:39:43.269 LINK connect_stress
00:39:43.269 CC test/nvme/boot_partition/boot_partition.o
00:39:43.837 LINK boot_partition
00:39:50.412 CC test/rpc_client/rpc_client_test.o
00:39:50.980 LINK rpc_client_test
00:40:03.179 CC test/nvme/compliance/nvme_compliance.o
00:40:05.077 LINK nvme_compliance
00:40:07.604 CC test/thread/poller_perf/poller_perf.o
00:40:08.980 LINK poller_perf
00:40:21.207 CC test/nvme/fused_ordering/fused_ordering.o
00:40:21.207 LINK fused_ordering
00:40:24.488 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:40:24.746 LINK histogram_ut
00:40:25.681 CC test/nvme/doorbell_aers/doorbell_aers.o
00:40:26.617 LINK doorbell_aers
00:40:27.993 CC test/unit/lib/accel/accel.c/accel_ut.o
00:40:28.560 CC test/nvme/fdp/fdp.o
00:40:28.560 CC test/nvme/cuse/cuse.o
00:40:29.936 LINK fdp
00:40:33.221 LINK cuse
00:40:33.221 LINK accel_ut
00:40:35.121 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:40:37.676 CC test/unit/lib/bdev/part.c/part_ut.o
00:40:44.239 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:40:44.804 LINK scsi_nvme_ut
00:40:45.369 CC test/thread/lock/spdk_lock.o
00:40:46.303 LINK part_ut
00:40:47.678 LINK bdev_ut
00:40:47.678 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:40:47.936 LINK spdk_lock
00:40:49.309 LINK gpt_ut
00:40:49.309 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:40:49.567 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:40:52.107 LINK vbdev_lvol_ut
00:40:54.008 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:40:55.937 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:40:56.871 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:40:57.130 LINK bdev_raid_sb_ut
00:40:58.065 LINK bdev_zone_ut
00:40:58.065 LINK bdev_raid_ut
00:40:58.999 LINK bdev_ut
00:41:02.284 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:41:02.851 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:41:03.418 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:41:03.985 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:41:04.549 LINK vbdev_zone_block_ut
00:41:04.549 LINK concat_ut
00:41:04.807 LINK raid1_ut
00:41:07.333 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:41:08.268 LINK blob_bdev_ut
00:41:08.268 CC test/unit/lib/blob/blob.c/blob_ut.o
00:41:09.201 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:41:09.201 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:41:09.459 CC test/unit/lib/dma/dma.c/dma_ut.o
00:41:09.459 LINK bdev_nvme_ut
00:41:09.459 LINK tree_ut
00:41:09.717 LINK dma_ut
00:41:09.717 CC test/unit/lib/event/app.c/app_ut.o
00:41:09.717 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:41:09.975 LINK blobfs_async_ut
00:41:10.233 LINK app_ut
00:41:10.233 LINK reactor_ut
00:41:10.490 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:41:11.063 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:41:12.039 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:41:12.039 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:41:12.039 LINK ioat_ut
00:41:12.039 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:41:12.297 LINK blobfs_sync_ut
00:41:13.230 LINK json_util_ut
00:41:13.488 LINK conn_ut
00:41:13.746 LINK blob_ut
00:41:13.746 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:41:14.311 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:41:14.876 LINK json_parse_ut
00:41:15.134 LINK init_grp_ut
00:41:15.700 LINK json_write_ut
00:41:16.267 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:41:18.168 CC test/unit/lib/iscsi/param.c/param_ut.o
00:41:18.168 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:41:19.101 LINK param_ut
00:41:19.101 LINK portal_grp_ut
00:41:19.360 LINK iscsi_ut
00:41:19.618 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:41:20.993 LINK tgt_node_ut
00:41:20.993 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:41:21.251 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:41:21.508 LINK blobfs_bdev_ut
00:41:22.074 LINK jsonrpc_server_ut
00:41:22.074 CC test/unit/lib/log/log.c/log_ut.o
00:41:22.332 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:41:22.898 LINK log_ut
00:41:24.273 CC test/unit/lib/notify/notify.c/notify_ut.o
00:41:24.273 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:41:24.273 LINK notify_ut
00:41:24.273 LINK lvol_ut
00:41:24.531 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:41:24.789 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:41:25.047 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:41:25.305 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:41:25.305 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:41:25.305 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:41:25.305 LINK nvme_ut
00:41:25.871 LINK ctrlr_bdev_ut
00:41:26.129 LINK ctrlr_ut
00:41:26.129 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:41:26.696 LINK ctrlr_discovery_ut
00:41:26.696 LINK tcp_ut
00:41:26.696 LINK subsystem_ut
00:41:27.294 LINK nvme_ctrlr_cmd_ut
00:41:28.229 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:41:28.229 LINK nvme_ctrlr_ut
00:41:30.761 LINK nvme_ctrlr_ocssd_cmd_ut
00:41:30.761 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:41:32.136 LINK nvmf_ut
00:41:32.136 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:41:32.702 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:41:32.961 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:41:32.961 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:41:33.220 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:41:33.220 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:41:33.220 LINK nvme_ns_ut
00:41:33.478 CC test/unit/lib/sock/sock.c/sock_ut.o
00:41:33.478 LINK rdma_ut
00:41:33.478 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:41:33.478 LINK dev_ut
00:41:34.046 LINK nvme_ns_cmd_ut
00:41:34.046 LINK sock_ut
00:41:34.305 LINK transport_ut
00:41:34.305 LINK nvme_ns_ocssd_cmd_ut
00:41:34.871 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:41:35.438 LINK nvme_pcie_ut
00:41:36.372 LINK nvme_poll_group_ut
00:41:36.936 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:41:37.867 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:41:37.867 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:41:38.432 LINK nvme_qpair_ut
00:41:38.432 LINK lun_ut
00:41:38.432 LINK nvme_quirks_ut
00:41:38.690 CC test/unit/lib/sock/posix.c/posix_ut.o
00:41:38.949 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:41:39.207 LINK scsi_ut
00:41:39.466 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:41:39.466 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:41:39.466 LINK posix_ut
00:41:40.034 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:41:40.034 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:41:40.292 LINK nvme_transport_ut
00:41:40.551 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:41:40.551 LINK scsi_bdev_ut
00:41:40.551 LINK nvme_tcp_ut
00:41:40.809 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:41:40.809 LINK scsi_pr_ut
00:41:40.809 LINK nvme_io_msg_ut
00:41:41.068 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:41:41.068 CC test/unit/lib/thread/thread.c/thread_ut.o
00:41:41.634 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:41:41.634 LINK nvme_pcie_common_ut
00:41:41.893 LINK nvme_fabric_ut
00:41:41.893 LINK nvme_opal_ut
00:41:42.460 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:41:42.735 LINK thread_ut
00:41:42.735 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:41:42.735 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:41:43.347 LINK iobuf_ut
00:41:43.606 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:41:43.606 CC test/unit/lib/util/base64.c/base64_ut.o
00:41:43.606 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:41:43.865 LINK nvme_cuse_ut
00:41:43.865 LINK nvme_rdma_ut
00:41:43.865 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:41:44.130 LINK base64_ut
00:41:44.130 LINK pci_event_ut
00:41:44.130 LINK subsystem_ut
00:41:44.392 LINK rpc_ut
00:41:45.326 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:41:45.326 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:41:45.584 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:41:45.584 LINK idxd_user_ut
00:41:45.843 LINK idxd_ut
00:41:45.843 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:41:45.843 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:41:45.843 CC test/unit/lib/rdma/common.c/common_ut.o
00:41:45.843 LINK bit_array_ut
00:41:46.101 LINK cpuset_ut
00:41:46.101 LINK common_ut
00:41:46.101 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:41:46.359 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:41:46.359 LINK crc16_ut
00:41:46.359 LINK ftl_l2p_ut
00:41:46.618 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:41:46.618 LINK crc32_ieee_ut
00:41:46.618 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:41:46.876 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:41:46.876 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:41:46.876 CC test/unit/lib/util/dif.c/dif_ut.o
00:41:46.876 LINK crc32c_ut
00:41:46.876 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:41:46.876 LINK crc64_ut
00:41:46.877 LINK vhost_ut
00:41:47.135 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:41:47.135 LINK ftl_bitmap_ut
00:41:47.135 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:41:47.135 LINK ftl_io_ut
00:41:47.394 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:41:47.394 CC test/unit/lib/util/iov.c/iov_ut.o
00:41:47.394 LINK ftl_band_ut
00:41:47.394 LINK dif_ut
00:41:47.653 LINK ftl_mempool_ut
00:41:47.653 CC test/unit/lib/util/math.c/math_ut.o
00:41:47.912 LINK ftl_mngt_ut
00:41:47.912 LINK iov_ut
00:41:47.912 LINK math_ut
00:41:48.480 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:41:48.739 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:41:49.306 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:41:49.565 LINK ftl_layout_upgrade_ut
00:41:49.823 LINK ftl_sb_ut
00:41:49.824 CC test/unit/lib/util/string.c/string_ut.o
00:41:49.824 LINK pipe_ut
00:41:50.082 LINK string_ut
00:41:50.341 CC test/unit/lib/util/xor.c/xor_ut.o
00:41:50.341 LINK xor_ut
00:42:46.592 json_parse_ut.c: In function ‘test_parse_nesting’:
00:42:46.592 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:42:46.593 616 | test_parse_nesting(void)
00:42:46.593 | ^
00:42:46.593 21:25:11 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:42:46.593 make[1]: Nothing to be done for 'clean'.
00:42:47.527 21:25:15 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:42:47.527 21:25:15 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:42:47.527 21:25:15 -- common/autotest_common.sh@10 -- $ set +x
00:42:47.527 21:25:15 -- spdk/autopackage.sh@48 -- $ timing_finish
00:42:47.527 21:25:15 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:47.527 21:25:15 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:42:47.527 21:25:15 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:42:47.527 + [[ -n 2082 ]]
00:42:47.527 + sudo kill 2082
00:42:47.537 [Pipeline] }
00:42:47.555 [Pipeline] // timeout
00:42:47.559 [Pipeline] }
00:42:47.574 [Pipeline] // stage
00:42:47.578 [Pipeline] }
00:42:47.593 [Pipeline] // catchError
00:42:47.601 [Pipeline] stage
00:42:47.603 [Pipeline] { (Stop VM)
00:42:47.615 [Pipeline] sh
00:42:47.889 + vagrant halt
00:42:51.172 ==> default: Halting domain...
00:43:01.189 [Pipeline] sh
00:43:01.468 + vagrant destroy -f
00:43:04.752 ==> default: Removing domain...
00:43:05.700 [Pipeline] sh
00:43:05.981 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:43:05.989 [Pipeline] }
00:43:06.006 [Pipeline] // stage
00:43:06.012 [Pipeline] }
00:43:06.029 [Pipeline] // dir
00:43:06.035 [Pipeline] }
00:43:06.051 [Pipeline] // wrap
00:43:06.057 [Pipeline] }
00:43:06.073 [Pipeline] // catchError
00:43:06.082 [Pipeline] stage
00:43:06.084 [Pipeline] { (Epilogue)
00:43:06.099 [Pipeline] sh
00:43:06.380 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:43:24.481 [Pipeline] catchError
00:43:24.483 [Pipeline] {
00:43:24.498 [Pipeline] sh
00:43:24.778 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:43:24.778 Artifacts sizes are good
00:43:24.786 [Pipeline] }
00:43:24.803 [Pipeline] // catchError
00:43:24.814 [Pipeline] archiveArtifacts
00:43:24.821 Archiving artifacts
00:43:25.176 [Pipeline] cleanWs
00:43:25.187 [WS-CLEANUP] Deleting project workspace...
00:43:25.187 [WS-CLEANUP] Deferred wipeout is used...
00:43:25.193 [WS-CLEANUP] done
00:43:25.195 [Pipeline] }
00:43:25.212 [Pipeline] // stage
00:43:25.218 [Pipeline] }
00:43:25.234 [Pipeline] // node
00:43:25.240 [Pipeline] End of Pipeline
00:43:25.287 Finished: SUCCESS